2026-01-02 00:00:08.061345 | Job console starting
2026-01-02 00:00:08.083669 | Updating git repos
2026-01-02 00:00:08.231510 | Cloning repos into workspace
2026-01-02 00:00:08.630466 | Restoring repo states
2026-01-02 00:00:08.687641 | Merging changes
2026-01-02 00:00:08.687663 | Checking out repos
2026-01-02 00:00:09.143007 | Preparing playbooks
2026-01-02 00:00:10.511555 | Running Ansible setup
2026-01-02 00:00:19.833617 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-02 00:00:22.635322 |
2026-01-02 00:00:22.635525 | PLAY [Base pre]
2026-01-02 00:00:22.751018 |
2026-01-02 00:00:22.751251 | TASK [Setup log path fact]
2026-01-02 00:00:22.809201 | orchestrator | ok
2026-01-02 00:00:22.909500 |
2026-01-02 00:00:22.909697 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-02 00:00:22.985016 | orchestrator | ok
2026-01-02 00:00:23.066465 |
2026-01-02 00:00:23.066622 | TASK [emit-job-header : Print job information]
2026-01-02 00:00:23.155948 | # Job Information
2026-01-02 00:00:23.156153 | Ansible Version: 2.16.14
2026-01-02 00:00:23.156211 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-02 00:00:23.156249 | Pipeline: periodic-midnight
2026-01-02 00:00:23.156275 | Executor: 521e9411259a
2026-01-02 00:00:23.156297 | Triggered by: https://github.com/osism/testbed
2026-01-02 00:00:23.156320 | Event ID: dd79a623558a426383a27ac1b46d7c38
2026-01-02 00:00:23.219747 |
2026-01-02 00:00:23.219902 | LOOP [emit-job-header : Print node information]
2026-01-02 00:00:24.127644 | orchestrator | ok:
2026-01-02 00:00:24.127850 | orchestrator | # Node Information
2026-01-02 00:00:24.127885 | orchestrator | Inventory Hostname: orchestrator
2026-01-02 00:00:24.127911 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-02 00:00:24.127934 | orchestrator | Username: zuul-testbed05
2026-01-02 00:00:24.127955 | orchestrator | Distro: Debian 12.12
2026-01-02 00:00:24.127979 | orchestrator | Provider: static-testbed
2026-01-02 00:00:24.128001 | orchestrator | Region:
2026-01-02 00:00:24.128022 | orchestrator | Label: testbed-orchestrator
2026-01-02 00:00:24.128042 | orchestrator | Product Name: OpenStack Nova
2026-01-02 00:00:24.128060 | orchestrator | Interface IP: 81.163.193.140
2026-01-02 00:00:24.164729 |
2026-01-02 00:00:24.164893 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-02 00:00:26.915428 | orchestrator -> localhost | changed
2026-01-02 00:00:26.924856 |
2026-01-02 00:00:26.931258 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-02 00:00:32.039970 | orchestrator -> localhost | changed
2026-01-02 00:00:32.054761 |
2026-01-02 00:00:32.054871 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-02 00:00:32.900792 | orchestrator -> localhost | ok
2026-01-02 00:00:32.907300 |
2026-01-02 00:00:32.907400 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-02 00:00:32.979504 | orchestrator | ok
2026-01-02 00:00:33.039900 | orchestrator | included: /var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-02 00:00:33.056903 |
2026-01-02 00:00:33.057007 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-02 00:00:37.216894 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-02 00:00:37.217091 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/work/96a496e8a76b473f95201e7dbfdd1770_id_rsa
2026-01-02 00:00:37.217129 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/work/96a496e8a76b473f95201e7dbfdd1770_id_rsa.pub
2026-01-02 00:00:37.217155 | orchestrator -> localhost | The key fingerprint is:
2026-01-02 00:00:37.217179 | orchestrator -> localhost | SHA256:XT06+49DMf3mVm36oaWm/wxSRsmj586qU9RXmAJA1Y0 zuul-build-sshkey
2026-01-02 00:00:37.217217 | orchestrator -> localhost | The key's randomart image is:
2026-01-02 00:00:37.217252 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-02 00:00:37.217275 | orchestrator -> localhost | | .oooo o o |
2026-01-02 00:00:37.217297 | orchestrator -> localhost | | E.= .|
2026-01-02 00:00:37.217318 | orchestrator -> localhost | | o*o..|
2026-01-02 00:00:37.217338 | orchestrator -> localhost | | . ooo+o.|
2026-01-02 00:00:37.217357 | orchestrator -> localhost | | S o.o+.oo|
2026-01-02 00:00:37.217380 | orchestrator -> localhost | | .=o. *|
2026-01-02 00:00:37.217401 | orchestrator -> localhost | | ...+ B.|
2026-01-02 00:00:37.217422 | orchestrator -> localhost | | . +oO.+|
2026-01-02 00:00:37.217442 | orchestrator -> localhost | | .oo=*+Bo|
2026-01-02 00:00:37.217462 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-02 00:00:37.217510 | orchestrator -> localhost | ok: Runtime: 0:00:02.307425
2026-01-02 00:00:37.224465 |
2026-01-02 00:00:37.224561 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-02 00:00:37.278962 | orchestrator | ok
2026-01-02 00:00:37.298373 | orchestrator | included: /var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-02 00:00:37.318061 |
2026-01-02 00:00:37.319475 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-02 00:00:37.367344 | orchestrator | skipping: Conditional result was False
2026-01-02 00:00:37.377922 |
2026-01-02 00:00:37.378028 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-02 00:00:38.109535 | orchestrator | changed
2026-01-02 00:00:38.131455 |
2026-01-02 00:00:38.131569 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-02 00:00:38.463936 | orchestrator | ok
2026-01-02 00:00:38.470306 |
2026-01-02 00:00:38.470403 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-02 00:00:39.100306 | orchestrator | ok
2026-01-02 00:00:39.110768 |
2026-01-02 00:00:39.113362 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-02 00:00:39.771801 | orchestrator | ok
2026-01-02 00:00:39.779581 |
2026-01-02 00:00:39.779688 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-02 00:00:39.832118 | orchestrator | skipping: Conditional result was False
2026-01-02 00:00:39.837872 |
2026-01-02 00:00:39.837965 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-02 00:00:41.511192 | orchestrator -> localhost | changed
2026-01-02 00:00:41.546235 |
2026-01-02 00:00:41.546352 | TASK [add-build-sshkey : Add back temp key]
2026-01-02 00:00:42.279322 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/work/96a496e8a76b473f95201e7dbfdd1770_id_rsa (zuul-build-sshkey)
2026-01-02 00:00:42.279546 | orchestrator -> localhost | ok: Runtime: 0:00:00.008201
2026-01-02 00:00:42.287824 |
2026-01-02 00:00:42.287928 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-02 00:00:43.273837 | orchestrator | ok
2026-01-02 00:00:43.296531 |
2026-01-02 00:00:43.296644 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-02 00:00:43.414646 | orchestrator | skipping: Conditional result was False
2026-01-02 00:00:43.707258 |
2026-01-02 00:00:43.717614 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-02 00:00:44.645451 | orchestrator | ok
2026-01-02 00:00:44.666759 |
2026-01-02 00:00:44.666894 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-02 00:00:44.720970 | orchestrator | ok
2026-01-02 00:00:44.728051 |
2026-01-02 00:00:44.728159 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-02 00:00:46.082418 | orchestrator -> localhost | ok
2026-01-02 00:00:46.091413 |
2026-01-02 00:00:46.091539 | TASK [validate-host : Collect information about the host]
2026-01-02 00:00:49.343094 | orchestrator | ok
2026-01-02 00:00:49.452419 |
2026-01-02 00:00:49.452584 | TASK [validate-host : Sanitize hostname]
2026-01-02 00:00:49.600643 | orchestrator | ok
2026-01-02 00:00:49.608971 |
2026-01-02 00:00:49.609101 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-02 00:00:53.141493 | orchestrator -> localhost | changed
2026-01-02 00:00:53.148706 |
2026-01-02 00:00:53.149044 | TASK [validate-host : Collect information about zuul worker]
2026-01-02 00:00:54.404102 | orchestrator | ok
2026-01-02 00:00:54.445543 |
2026-01-02 00:00:54.447341 | TASK [validate-host : Write out all zuul information for each host]
2026-01-02 00:00:58.358548 | orchestrator -> localhost | changed
2026-01-02 00:00:58.369726 |
2026-01-02 00:00:58.369815 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-02 00:00:58.746971 | orchestrator | ok
2026-01-02 00:00:58.752156 |
2026-01-02 00:00:58.752268 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-02 00:02:18.535403 | orchestrator | changed:
2026-01-02 00:02:18.535656 | orchestrator | .d..t...... src/
2026-01-02 00:02:18.535691 | orchestrator | .d..t...... src/github.com/
2026-01-02 00:02:18.535716 | orchestrator | .d..t...... src/github.com/osism/
2026-01-02 00:02:18.535737 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-02 00:02:18.535758 | orchestrator | RedHat.yml
2026-01-02 00:02:18.550479 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-02 00:02:18.550496 | orchestrator | RedHat.yml
2026-01-02 00:02:18.550548 | orchestrator | = 1.53.0"...
2026-01-02 00:02:34.549656 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-02 00:02:34.665693 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-02 00:02:35.189794 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-02 00:02:35.262841 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-02 00:02:36.012442 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-02 00:02:36.085730 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-02 00:02:36.573807 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-02 00:02:36.573893 | orchestrator |
2026-01-02 00:02:36.573901 | orchestrator | Providers are signed by their developers.
2026-01-02 00:02:36.573906 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-02 00:02:36.573911 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-02 00:02:36.573919 | orchestrator |
2026-01-02 00:02:36.573924 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-02 00:02:36.573929 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-02 00:02:36.573947 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-02 00:02:36.573952 | orchestrator | you run "tofu init" in the future.
2026-01-02 00:02:36.574267 | orchestrator |
2026-01-02 00:02:36.574295 | orchestrator | OpenTofu has been successfully initialized!
2026-01-02 00:02:36.574302 | orchestrator |
2026-01-02 00:02:36.574306 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-02 00:02:36.574310 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-02 00:02:36.574325 | orchestrator | should now work.
2026-01-02 00:02:36.574329 | orchestrator |
2026-01-02 00:02:36.574333 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-02 00:02:36.574337 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-02 00:02:36.574342 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-02 00:02:36.740390 | orchestrator | Created and switched to workspace "ci"!
2026-01-02 00:02:36.740467 | orchestrator |
2026-01-02 00:02:36.740474 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-02 00:02:36.740481 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-02 00:02:36.740486 | orchestrator | for this configuration.
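[Editor's note] The provider versions resolved by `tofu init` above correspond to a `required_providers` block roughly like the following. This is a sketch reconstructed from the log, not the actual testbed configuration file; in particular, attributing the truncated `>= 1.53.0` constraint to the openstack provider is an assumption.

```hcl
terraform {
  required_providers {
    # Assumed owner of the ">= 1.53.0" constraint seen (truncated) in the log;
    # resolved to v3.4.0 in this run.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # Constraint ">= 2.2.0" appears verbatim in the log; resolved to v2.6.1.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # No constraint visible in the log; resolved to v3.2.4.
    null = {
      source = "hashicorp/null"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl` alongside such a block pins these exact selections for future `tofu init` runs, as the init output recommends.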
2026-01-02 00:02:36.843360 | orchestrator | ci.auto.tfvars
2026-01-02 00:02:36.846487 | orchestrator | default_custom.tf
2026-01-02 00:02:37.870096 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-02 00:02:38.458072 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-02 00:02:38.688804 | orchestrator |
2026-01-02 00:02:38.688864 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-02 00:02:38.688873 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-02 00:02:38.688886 | orchestrator | + create
2026-01-02 00:02:38.688891 | orchestrator | <= read (data resources)
2026-01-02 00:02:38.688896 | orchestrator |
2026-01-02 00:02:38.688900 | orchestrator | OpenTofu will perform the following actions:
2026-01-02 00:02:38.688947 | orchestrator |
2026-01-02 00:02:38.688955 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-02 00:02:38.688961 | orchestrator | # (config refers to values not yet known)
2026-01-02 00:02:38.688965 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-02 00:02:38.688969 | orchestrator | + checksum = (known after apply)
2026-01-02 00:02:38.688974 | orchestrator | + created_at = (known after apply)
2026-01-02 00:02:38.688978 | orchestrator | + file = (known after apply)
2026-01-02 00:02:38.688981 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.689010 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.689014 | orchestrator | + min_disk_gb = (known after apply)
2026-01-02 00:02:38.689018 | orchestrator | + min_ram_mb = (known after apply)
2026-01-02 00:02:38.689022 | orchestrator | + most_recent = true
2026-01-02 00:02:38.689026 | orchestrator | + name = (known after apply)
2026-01-02 00:02:38.689030 | orchestrator | + protected = (known after apply)
2026-01-02 00:02:38.689034 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.689042 | orchestrator | + schema = (known after apply)
2026-01-02 00:02:38.689046 | orchestrator | + size_bytes = (known after apply)
2026-01-02 00:02:38.689049 | orchestrator | + tags = (known after apply)
2026-01-02 00:02:38.689053 | orchestrator | + updated_at = (known after apply)
2026-01-02 00:02:38.689057 | orchestrator | }
2026-01-02 00:02:38.689063 | orchestrator |
2026-01-02 00:02:38.689066 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-02 00:02:38.689070 | orchestrator | # (config refers to values not yet known)
2026-01-02 00:02:38.689074 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-02 00:02:38.689078 | orchestrator | + checksum = (known after apply)
2026-01-02 00:02:38.689082 | orchestrator | + created_at = (known after apply)
2026-01-02 00:02:38.689085 | orchestrator | + file = (known after apply)
2026-01-02 00:02:38.689089 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.689093 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.689097 | orchestrator | + min_disk_gb = (known after apply)
2026-01-02 00:02:38.689100 | orchestrator | + min_ram_mb = (known after apply)
2026-01-02 00:02:38.689104 | orchestrator | + most_recent = true
2026-01-02 00:02:38.689108 | orchestrator | + name = (known after apply)
2026-01-02 00:02:38.689112 | orchestrator | + protected = (known after apply)
2026-01-02 00:02:38.689115 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.689119 | orchestrator | + schema = (known after apply)
2026-01-02 00:02:38.689123 | orchestrator | + size_bytes = (known after apply)
2026-01-02 00:02:38.689126 | orchestrator | + tags = (known after apply)
2026-01-02 00:02:38.689130 | orchestrator | + updated_at = (known after apply)
2026-01-02 00:02:38.689134 | orchestrator | }
2026-01-02 00:02:38.689197 | orchestrator |
2026-01-02 00:02:38.689204 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-02 00:02:38.689208 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-02 00:02:38.689212 | orchestrator | + content = (known after apply)
2026-01-02 00:02:38.689216 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:38.689220 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:38.689224 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:38.689228 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:38.689231 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:38.689235 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:38.689239 | orchestrator | + directory_permission = "0777"
2026-01-02 00:02:38.689243 | orchestrator | + file_permission = "0644"
2026-01-02 00:02:38.689246 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-02 00:02:38.689250 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.689254 | orchestrator | }
2026-01-02 00:02:38.689302 | orchestrator |
2026-01-02 00:02:38.689307 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-02 00:02:38.689324 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-02 00:02:38.689328 | orchestrator | + content = (known after apply)
2026-01-02 00:02:38.689332 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:38.689335 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:38.689339 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:38.689343 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:38.689346 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:38.689350 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:38.689354 | orchestrator | + directory_permission = "0777"
2026-01-02 00:02:38.689358 | orchestrator | + file_permission = "0644"
2026-01-02 00:02:38.689383 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-02 00:02:38.689387 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.689391 | orchestrator | }
2026-01-02 00:02:38.692216 | orchestrator |
2026-01-02 00:02:38.692402 | orchestrator | # local_file.inventory will be created
2026-01-02 00:02:38.692418 | orchestrator | + resource "local_file" "inventory" {
2026-01-02 00:02:38.692429 | orchestrator | + content = (known after apply)
2026-01-02 00:02:38.692438 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:38.692446 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:38.692455 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:38.692463 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:38.692475 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:38.692483 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:38.692490 | orchestrator | + directory_permission = "0777"
2026-01-02 00:02:38.692498 | orchestrator | + file_permission = "0644"
2026-01-02 00:02:38.692506 | orchestrator | + filename = "inventory.ci"
2026-01-02 00:02:38.692514 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.692522 | orchestrator | }
2026-01-02 00:02:38.692531 | orchestrator |
2026-01-02 00:02:38.692539 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-02 00:02:38.692547 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-02 00:02:38.692555 | orchestrator | + content = (sensitive value)
2026-01-02 00:02:38.692563 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-02 00:02:38.692570 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-02 00:02:38.692578 | orchestrator | + content_md5 = (known after apply)
2026-01-02 00:02:38.692586 | orchestrator | + content_sha1 = (known after apply)
2026-01-02 00:02:38.692594 | orchestrator | + content_sha256 = (known after apply)
2026-01-02 00:02:38.692602 | orchestrator | + content_sha512 = (known after apply)
2026-01-02 00:02:38.692610 | orchestrator | + directory_permission = "0700"
2026-01-02 00:02:38.692618 | orchestrator | + file_permission = "0600"
2026-01-02 00:02:38.692626 | orchestrator | + filename = ".id_rsa.ci"
2026-01-02 00:02:38.692634 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.692641 | orchestrator | }
2026-01-02 00:02:38.692649 | orchestrator |
2026-01-02 00:02:38.692657 | orchestrator | # null_resource.node_semaphore will be created
2026-01-02 00:02:38.692665 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-02 00:02:38.692673 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.692681 | orchestrator | }
2026-01-02 00:02:38.692689 | orchestrator |
2026-01-02 00:02:38.692698 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-02 00:02:38.692707 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-02 00:02:38.692715 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.692723 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.692731 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.692739 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.692746 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.692755 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-02 00:02:38.692762 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.692770 | orchestrator | + size = 80
2026-01-02 00:02:38.692778 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.692786 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.692794 | orchestrator | }
2026-01-02 00:02:38.692801 | orchestrator |
2026-01-02 00:02:38.692809 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-02 00:02:38.692817 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:38.692825 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.692833 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.692841 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.692866 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.692874 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.692882 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-02 00:02:38.692890 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.692898 | orchestrator | + size = 80
2026-01-02 00:02:38.692906 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.692914 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.692921 | orchestrator | }
2026-01-02 00:02:38.692929 | orchestrator |
2026-01-02 00:02:38.692937 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-02 00:02:38.692945 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:38.692953 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.692961 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.692969 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.692977 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.692984 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.692992 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-02 00:02:38.693000 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693008 | orchestrator | + size = 80
2026-01-02 00:02:38.693016 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693023 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693031 | orchestrator | }
2026-01-02 00:02:38.693039 | orchestrator |
2026-01-02 00:02:38.693056 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-02 00:02:38.693064 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:38.693072 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693080 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693087 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693095 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.693103 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693111 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-02 00:02:38.693118 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693126 | orchestrator | + size = 80
2026-01-02 00:02:38.693134 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693142 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693149 | orchestrator | }
2026-01-02 00:02:38.693157 | orchestrator |
2026-01-02 00:02:38.693165 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-02 00:02:38.693173 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:38.693181 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693189 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693212 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693220 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.693228 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693240 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-02 00:02:38.693248 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693256 | orchestrator | + size = 80
2026-01-02 00:02:38.693264 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693272 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693280 | orchestrator | }
2026-01-02 00:02:38.693288 | orchestrator |
2026-01-02 00:02:38.693296 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-02 00:02:38.693304 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:38.693329 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693338 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693346 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693360 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.693368 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693376 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-02 00:02:38.693384 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693391 | orchestrator | + size = 80
2026-01-02 00:02:38.693399 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693407 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693415 | orchestrator | }
2026-01-02 00:02:38.693423 | orchestrator |
2026-01-02 00:02:38.693431 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-02 00:02:38.693439 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-02 00:02:38.693446 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693454 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693462 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693470 | orchestrator | + image_id = (known after apply)
2026-01-02 00:02:38.693478 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693486 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-02 00:02:38.693494 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693501 | orchestrator | + size = 80
2026-01-02 00:02:38.693509 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693517 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693525 | orchestrator | }
2026-01-02 00:02:38.693533 | orchestrator |
2026-01-02 00:02:38.693541 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-02 00:02:38.693549 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.693557 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693565 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693573 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693581 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693589 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-02 00:02:38.693597 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693605 | orchestrator | + size = 20
2026-01-02 00:02:38.693613 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693620 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693628 | orchestrator | }
2026-01-02 00:02:38.693636 | orchestrator |
2026-01-02 00:02:38.693644 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-02 00:02:38.693652 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.693660 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693668 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693676 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693684 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693692 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-02 00:02:38.693699 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693707 | orchestrator | + size = 20
2026-01-02 00:02:38.693715 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693723 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693731 | orchestrator | }
2026-01-02 00:02:38.693739 | orchestrator |
2026-01-02 00:02:38.693747 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-02 00:02:38.693755 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.693763 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693771 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693778 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693786 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693794 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-02 00:02:38.693802 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693815 | orchestrator | + size = 20
2026-01-02 00:02:38.693823 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693831 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693839 | orchestrator | }
2026-01-02 00:02:38.693847 | orchestrator |
2026-01-02 00:02:38.693855 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-02 00:02:38.693863 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.693871 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693879 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693886 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693894 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.693902 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-02 00:02:38.693910 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.693917 | orchestrator | + size = 20
2026-01-02 00:02:38.693925 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.693933 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.693941 | orchestrator | }
2026-01-02 00:02:38.693949 | orchestrator |
2026-01-02 00:02:38.693957 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-02 00:02:38.693965 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.693972 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.693980 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.693988 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.693996 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.694009 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-02 00:02:38.694056 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.694070 | orchestrator | + size = 20
2026-01-02 00:02:38.694078 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.694086 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.694094 | orchestrator | }
2026-01-02 00:02:38.694102 | orchestrator |
2026-01-02 00:02:38.694109 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-02 00:02:38.694117 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.694125 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.694133 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.694141 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.694149 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.694156 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-02 00:02:38.694164 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.694172 | orchestrator | + size = 20
2026-01-02 00:02:38.694180 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.694188 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.694195 | orchestrator | }
2026-01-02 00:02:38.694203 | orchestrator |
2026-01-02 00:02:38.694211 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-02 00:02:38.694219 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.694227 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.694235 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.694242 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.694250 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.694258 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-02 00:02:38.694266 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.694273 | orchestrator | + size = 20
2026-01-02 00:02:38.694281 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.694289 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.694297 | orchestrator | }
2026-01-02 00:02:38.694305 | orchestrator |
2026-01-02 00:02:38.694366 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-02 00:02:38.694374 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-02 00:02:38.694388 | orchestrator | + attachment = (known after apply)
2026-01-02 00:02:38.694396 | orchestrator | + availability_zone = "nova"
2026-01-02 00:02:38.694404 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.694412 | orchestrator | + metadata = (known after apply)
2026-01-02 00:02:38.694419 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-02 00:02:38.694428 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.694435 | orchestrator | + size = 20
2026-01-02 00:02:38.694443 | orchestrator | + volume_retype_policy = "never"
2026-01-02 00:02:38.694451 | orchestrator | + volume_type = "ssd"
2026-01-02 00:02:38.694459 | orchestrator | }
2026-01-02 00:02:38.694467 | orchestrator |
2026-01-02 00:02:38.694475 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-02 00:02:38.694483 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-02 00:02:38.694490 | orchestrator | + attachment = (known after apply) 2026-01-02 00:02:38.694498 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.694506 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.694514 | orchestrator | + metadata = (known after apply) 2026-01-02 00:02:38.694522 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-02 00:02:38.694529 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.694537 | orchestrator | + size = 20 2026-01-02 00:02:38.694545 | orchestrator | + volume_retype_policy = "never" 2026-01-02 00:02:38.694553 | orchestrator | + volume_type = "ssd" 2026-01-02 00:02:38.694561 | orchestrator | } 2026-01-02 00:02:38.694569 | orchestrator | 2026-01-02 00:02:38.694576 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-02 00:02:38.694585 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-02 00:02:38.694593 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.694600 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.694608 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.694616 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.694624 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.694632 | orchestrator | + config_drive = true 2026-01-02 00:02:38.694640 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.694647 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.694655 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-02 00:02:38.694663 | orchestrator | + force_delete = false 2026-01-02 00:02:38.694671 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.694678 | 
orchestrator | + id = (known after apply) 2026-01-02 00:02:38.694686 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.694694 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.694702 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.694709 | orchestrator | + name = "testbed-manager" 2026-01-02 00:02:38.694717 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.694725 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.694733 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.694741 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:38.694749 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.694756 | orchestrator | + user_data = (sensitive value) 2026-01-02 00:02:38.694764 | orchestrator | 2026-01-02 00:02:38.694773 | orchestrator | + block_device { 2026-01-02 00:02:38.694781 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.694788 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:38.694801 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.694809 | orchestrator | + multiattach = false 2026-01-02 00:02:38.694817 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.694824 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.694841 | orchestrator | } 2026-01-02 00:02:38.694850 | orchestrator | 2026-01-02 00:02:38.694858 | orchestrator | + network { 2026-01-02 00:02:38.694866 | orchestrator | + access_network = false 2026-01-02 00:02:38.694874 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.694881 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.694889 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.694903 | orchestrator | + name = (known after apply) 2026-01-02 00:02:38.694911 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.694919 | orchestrator | + uuid = (known after apply) 2026-01-02 
00:02:38.694927 | orchestrator | } 2026-01-02 00:02:38.694935 | orchestrator | } 2026-01-02 00:02:38.694942 | orchestrator | 2026-01-02 00:02:38.694950 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-02 00:02:38.694958 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:38.694966 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.694974 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.694982 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.694989 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.694997 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.695005 | orchestrator | + config_drive = true 2026-01-02 00:02:38.695013 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.695020 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.695028 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:38.695036 | orchestrator | + force_delete = false 2026-01-02 00:02:38.695044 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.695052 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.695060 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.695068 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.695075 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.695083 | orchestrator | + name = "testbed-node-0" 2026-01-02 00:02:38.695091 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.695099 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.695106 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.695114 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:38.695122 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.695130 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:38.695138 | orchestrator | 2026-01-02 00:02:38.695146 | orchestrator | + block_device { 2026-01-02 00:02:38.695153 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.695161 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:38.695169 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.695177 | orchestrator | + multiattach = false 2026-01-02 00:02:38.695185 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.695192 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.695200 | orchestrator | } 2026-01-02 00:02:38.695208 | orchestrator | 2026-01-02 00:02:38.695216 | orchestrator | + network { 2026-01-02 00:02:38.695224 | orchestrator | + access_network = false 2026-01-02 00:02:38.695232 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.695240 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.695247 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.695255 | orchestrator | + name = (known after apply) 2026-01-02 00:02:38.695263 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.695271 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.695278 | orchestrator | } 2026-01-02 00:02:38.695286 | orchestrator | } 2026-01-02 00:02:38.695294 | orchestrator | 2026-01-02 00:02:38.695302 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-02 00:02:38.695325 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:38.695333 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.695348 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.695356 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.695364 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.695371 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.695379 
| orchestrator | + config_drive = true 2026-01-02 00:02:38.695387 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.695395 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.695403 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:38.695411 | orchestrator | + force_delete = false 2026-01-02 00:02:38.695418 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.695426 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.695434 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.695442 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.695450 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.695457 | orchestrator | + name = "testbed-node-1" 2026-01-02 00:02:38.695465 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.695473 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.695481 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.695489 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:38.695497 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.695505 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:38.695512 | orchestrator | 2026-01-02 00:02:38.695520 | orchestrator | + block_device { 2026-01-02 00:02:38.695528 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.695536 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:38.695544 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.695552 | orchestrator | + multiattach = false 2026-01-02 00:02:38.695560 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.695568 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.695575 | orchestrator | } 2026-01-02 00:02:38.695583 | orchestrator | 2026-01-02 00:02:38.695591 | orchestrator | + network { 2026-01-02 00:02:38.695599 | orchestrator | + access_network = 
false 2026-01-02 00:02:38.695607 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.695615 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.695623 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.695630 | orchestrator | + name = (known after apply) 2026-01-02 00:02:38.695638 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.695646 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.695654 | orchestrator | } 2026-01-02 00:02:38.695662 | orchestrator | } 2026-01-02 00:02:38.695670 | orchestrator | 2026-01-02 00:02:38.695678 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-02 00:02:38.695686 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:38.695694 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.695701 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.695710 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.695722 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.695735 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.695743 | orchestrator | + config_drive = true 2026-01-02 00:02:38.695751 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.695759 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.695767 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:38.695774 | orchestrator | + force_delete = false 2026-01-02 00:02:38.695782 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.695790 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.695798 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.695811 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.695819 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.695827 | orchestrator | + name = 
"testbed-node-2" 2026-01-02 00:02:38.695835 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.695843 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.695850 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.695858 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:38.695866 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.695874 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:38.695882 | orchestrator | 2026-01-02 00:02:38.695890 | orchestrator | + block_device { 2026-01-02 00:02:38.695898 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.695906 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:38.695914 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.695922 | orchestrator | + multiattach = false 2026-01-02 00:02:38.695930 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.695937 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.695945 | orchestrator | } 2026-01-02 00:02:38.695953 | orchestrator | 2026-01-02 00:02:38.695961 | orchestrator | + network { 2026-01-02 00:02:38.695969 | orchestrator | + access_network = false 2026-01-02 00:02:38.695977 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.695985 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.695993 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.696000 | orchestrator | + name = (known after apply) 2026-01-02 00:02:38.696008 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.696016 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.696024 | orchestrator | } 2026-01-02 00:02:38.696032 | orchestrator | } 2026-01-02 00:02:38.696040 | orchestrator | 2026-01-02 00:02:38.696048 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-02 00:02:38.696055 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:38.696063 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.696071 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.696079 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.696087 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.696094 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.696102 | orchestrator | + config_drive = true 2026-01-02 00:02:38.696110 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.696118 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.696126 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:38.696133 | orchestrator | + force_delete = false 2026-01-02 00:02:38.696141 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.696149 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.696157 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.696165 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.696173 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.696181 | orchestrator | + name = "testbed-node-3" 2026-01-02 00:02:38.696188 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.696196 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.696204 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.696212 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:38.696219 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.696227 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:38.696235 | orchestrator | 2026-01-02 00:02:38.696243 | orchestrator | + block_device { 2026-01-02 00:02:38.696255 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.696263 | orchestrator | + delete_on_termination = false 2026-01-02 
00:02:38.696271 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.696284 | orchestrator | + multiattach = false 2026-01-02 00:02:38.696292 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.696300 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.696308 | orchestrator | } 2026-01-02 00:02:38.696338 | orchestrator | 2026-01-02 00:02:38.696346 | orchestrator | + network { 2026-01-02 00:02:38.696354 | orchestrator | + access_network = false 2026-01-02 00:02:38.696362 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.696370 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.696378 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.696385 | orchestrator | + name = (known after apply) 2026-01-02 00:02:38.696393 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.696401 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.696409 | orchestrator | } 2026-01-02 00:02:38.696417 | orchestrator | } 2026-01-02 00:02:38.696425 | orchestrator | 2026-01-02 00:02:38.696433 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-02 00:02:38.696441 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:38.696449 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.696456 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.696464 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.696472 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.696480 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.696488 | orchestrator | + config_drive = true 2026-01-02 00:02:38.696496 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.696504 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.696511 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:38.696519 | 
orchestrator | + force_delete = false 2026-01-02 00:02:38.696527 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.696535 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.696543 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.696560 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.696568 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.696576 | orchestrator | + name = "testbed-node-4" 2026-01-02 00:02:38.696583 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.696591 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.696599 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.696607 | orchestrator | + stop_before_destroy = false 2026-01-02 00:02:38.696614 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.696622 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:38.696630 | orchestrator | 2026-01-02 00:02:38.696638 | orchestrator | + block_device { 2026-01-02 00:02:38.696646 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.696653 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:38.696661 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.696669 | orchestrator | + multiattach = false 2026-01-02 00:02:38.696677 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.696684 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.696692 | orchestrator | } 2026-01-02 00:02:38.696700 | orchestrator | 2026-01-02 00:02:38.696708 | orchestrator | + network { 2026-01-02 00:02:38.696716 | orchestrator | + access_network = false 2026-01-02 00:02:38.696723 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.696731 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.696739 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.696747 | orchestrator | + name = (known 
after apply) 2026-01-02 00:02:38.696754 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.696762 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.696770 | orchestrator | } 2026-01-02 00:02:38.696778 | orchestrator | } 2026-01-02 00:02:38.696796 | orchestrator | 2026-01-02 00:02:38.696804 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-02 00:02:38.696812 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-02 00:02:38.696820 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-02 00:02:38.696828 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-02 00:02:38.696835 | orchestrator | + all_metadata = (known after apply) 2026-01-02 00:02:38.696843 | orchestrator | + all_tags = (known after apply) 2026-01-02 00:02:38.696851 | orchestrator | + availability_zone = "nova" 2026-01-02 00:02:38.696859 | orchestrator | + config_drive = true 2026-01-02 00:02:38.696866 | orchestrator | + created = (known after apply) 2026-01-02 00:02:38.696874 | orchestrator | + flavor_id = (known after apply) 2026-01-02 00:02:38.696882 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-02 00:02:38.696890 | orchestrator | + force_delete = false 2026-01-02 00:02:38.696902 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-02 00:02:38.696910 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.696918 | orchestrator | + image_id = (known after apply) 2026-01-02 00:02:38.696925 | orchestrator | + image_name = (known after apply) 2026-01-02 00:02:38.696933 | orchestrator | + key_pair = "testbed" 2026-01-02 00:02:38.696941 | orchestrator | + name = "testbed-node-5" 2026-01-02 00:02:38.696949 | orchestrator | + power_state = "active" 2026-01-02 00:02:38.696956 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.696964 | orchestrator | + security_groups = (known after apply) 2026-01-02 00:02:38.696972 | orchestrator | + 
stop_before_destroy = false 2026-01-02 00:02:38.696980 | orchestrator | + updated = (known after apply) 2026-01-02 00:02:38.696987 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-02 00:02:38.696995 | orchestrator | 2026-01-02 00:02:38.697003 | orchestrator | + block_device { 2026-01-02 00:02:38.697011 | orchestrator | + boot_index = 0 2026-01-02 00:02:38.697018 | orchestrator | + delete_on_termination = false 2026-01-02 00:02:38.697026 | orchestrator | + destination_type = "volume" 2026-01-02 00:02:38.697034 | orchestrator | + multiattach = false 2026-01-02 00:02:38.697041 | orchestrator | + source_type = "volume" 2026-01-02 00:02:38.697049 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.697057 | orchestrator | } 2026-01-02 00:02:38.697065 | orchestrator | 2026-01-02 00:02:38.697073 | orchestrator | + network { 2026-01-02 00:02:38.697080 | orchestrator | + access_network = false 2026-01-02 00:02:38.697088 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-02 00:02:38.697096 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-02 00:02:38.697103 | orchestrator | + mac = (known after apply) 2026-01-02 00:02:38.697111 | orchestrator | + name = (known after apply) 2026-01-02 00:02:38.697119 | orchestrator | + port = (known after apply) 2026-01-02 00:02:38.697127 | orchestrator | + uuid = (known after apply) 2026-01-02 00:02:38.697135 | orchestrator | } 2026-01-02 00:02:38.697143 | orchestrator | } 2026-01-02 00:02:38.697150 | orchestrator | 2026-01-02 00:02:38.697158 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-02 00:02:38.697166 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-02 00:02:38.697174 | orchestrator | + fingerprint = (known after apply) 2026-01-02 00:02:38.697182 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.697190 | orchestrator | + name = "testbed" 2026-01-02 00:02:38.697197 | orchestrator | + private_key = 
(sensitive value) 2026-01-02 00:02:38.697205 | orchestrator | + public_key = (known after apply) 2026-01-02 00:02:38.697213 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.697221 | orchestrator | + user_id = (known after apply) 2026-01-02 00:02:38.697228 | orchestrator | } 2026-01-02 00:02:38.697236 | orchestrator | 2026-01-02 00:02:38.697244 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-02 00:02:38.697252 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-02 00:02:38.697267 | orchestrator | + device = (known after apply) 2026-01-02 00:02:38.697275 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.697283 | orchestrator | + instance_id = (known after apply) 2026-01-02 00:02:38.697290 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.697298 | orchestrator | + volume_id = (known after apply) 2026-01-02 00:02:38.697306 | orchestrator | } 2026-01-02 00:02:38.697328 | orchestrator | 2026-01-02 00:02:38.697336 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-02 00:02:38.697344 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-02 00:02:38.697352 | orchestrator | + device = (known after apply) 2026-01-02 00:02:38.697360 | orchestrator | + id = (known after apply) 2026-01-02 00:02:38.697368 | orchestrator | + instance_id = (known after apply) 2026-01-02 00:02:38.697375 | orchestrator | + region = (known after apply) 2026-01-02 00:02:38.697383 | orchestrator | + volume_id = (known after apply) 2026-01-02 00:02:38.697391 | orchestrator | } 2026-01-02 00:02:38.697399 | orchestrator | 2026-01-02 00:02:38.697412 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-02 00:02:38.697420 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-02 00:02:38.701288 | orchestrator | + network_id = (known after apply)
2026-01-02 00:02:38.701294 | orchestrator | + no_gateway = false
2026-01-02 00:02:38.701300 | orchestrator | + region = (known after apply)
2026-01-02 00:02:38.701306 | orchestrator | + service_types = (known after apply)
2026-01-02 00:02:38.701330 | orchestrator | + tenant_id = (known after apply)
2026-01-02 00:02:38.701336 | orchestrator |
2026-01-02 00:02:38.701341 | orchestrator | + allocation_pool {
2026-01-02 00:02:38.701347 | orchestrator | + end = "192.168.31.250"
2026-01-02 00:02:38.701353 | orchestrator | + start = "192.168.31.200"
2026-01-02 00:02:38.701359 | orchestrator | }
2026-01-02 00:02:38.701364 | orchestrator | }
2026-01-02 00:02:38.701370 | orchestrator |
2026-01-02 00:02:38.701376 | orchestrator | # terraform_data.image will be created
2026-01-02 00:02:38.701382 | orchestrator | + resource "terraform_data" "image" {
2026-01-02 00:02:38.701387 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.701393 | orchestrator | + input = "Ubuntu 24.04"
2026-01-02 00:02:38.701399 | orchestrator | + output = (known after apply)
2026-01-02 00:02:38.701404 | orchestrator | }
2026-01-02 00:02:38.701410 | orchestrator |
2026-01-02 00:02:38.701416 | orchestrator | # terraform_data.image_node will be created
2026-01-02 00:02:38.701421 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-02 00:02:38.701427 | orchestrator | + id = (known after apply)
2026-01-02 00:02:38.701433 | orchestrator | + input = "Ubuntu 24.04"
2026-01-02 00:02:38.701438 | orchestrator | + output = (known after apply)
2026-01-02 00:02:38.701444 | orchestrator | }
2026-01-02 00:02:38.701450 | orchestrator |
2026-01-02 00:02:38.701456 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-01-02 00:02:38.701461 | orchestrator |
2026-01-02 00:02:38.701467 | orchestrator | Changes to Outputs:
2026-01-02 00:02:38.701473 | orchestrator | + manager_address = (sensitive value)
2026-01-02 00:02:38.701479 | orchestrator | + private_key = (sensitive value)
2026-01-02 00:02:38.838187 | orchestrator | terraform_data.image_node: Creating...
2026-01-02 00:02:38.840396 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=8248cf95-a68c-de7b-066f-7af80bad5aec]
2026-01-02 00:02:38.960339 | orchestrator | terraform_data.image: Creating...
2026-01-02 00:02:38.960907 | orchestrator | terraform_data.image: Creation complete after 0s [id=63c5d67a-a814-fdff-1c4b-7c78ea7f43e8]
2026-01-02 00:02:38.979161 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-02 00:02:38.983173 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-02 00:02:38.984250 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-02 00:02:38.985562 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-02 00:02:38.987627 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-02 00:02:38.988544 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-02 00:02:38.988651 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-02 00:02:38.990939 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-02 00:02:38.991006 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-02 00:02:38.995196 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-02 00:02:39.487253 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-02 00:02:39.496537 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-02 00:02:39.550830 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-02 00:02:39.558677 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-02 00:02:40.030813 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=24675617-c215-43b4-a24d-29b5c2010075]
2026-01-02 00:02:40.033667 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-02 00:02:40.089929 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-02 00:02:40.102238 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-02 00:02:42.711019 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f]
2026-01-02 00:02:42.728736 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-02 00:02:42.735188 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=1f7b9a5c2e885230287a0594f9325dd6499ac4de]
2026-01-02 00:02:42.736445 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=d0e027c6-7483-4a58-a550-b5020c348e91]
2026-01-02 00:02:42.748994 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-02 00:02:42.752083 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-02 00:02:42.757343 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=c745ed14080fc8acafcf7ff83b9cf0a0d7fc5cc2]
2026-01-02 00:02:42.760190 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=a863269e-8a4c-456a-8159-1ce463f39daf]
2026-01-02 00:02:42.766402 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-02 00:02:42.766936 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-02 00:02:42.785749 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e]
2026-01-02 00:02:42.795341 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-02 00:02:42.799266 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=2fd5b446-fd37-4cff-9553-5df2f9404005]
2026-01-02 00:02:42.802458 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=88e6ca38-e9bc-414f-be79-2564fe6ee507]
2026-01-02 00:02:42.803599 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-02 00:02:42.807990 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-02 00:02:42.829263 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=26e4f97c-d63e-4b12-851b-95c853c7feee]
2026-01-02 00:02:42.835672 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-02 00:02:42.859604 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=afdcae1f-177b-4712-b40b-94f97a828de8]
2026-01-02 00:02:42.891594 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=610525bf-123e-48f5-8f72-a088231f73d4]
2026-01-02 00:02:43.514480 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=50b2ec64-8433-40b2-a098-216db9da69f6]
2026-01-02 00:02:44.322989 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=43e59fd6-9065-4d70-81b2-50e21afed32b]
2026-01-02 00:02:44.331154 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-02 00:02:46.208580 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=33c14286-c543-4ffd-bb9f-b1db90f604b2]
2026-01-02 00:02:46.259461 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=305350a8-4399-44d3-b2ea-bbf7b9eb7f90]
2026-01-02 00:02:46.296891 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc]
2026-01-02 00:02:46.326425 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=d753ec4e-79c3-49c7-ab6d-1296ff31fb58]
2026-01-02 00:02:46.328501 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=bc584711-153a-497e-a318-857c0ea51dad]
2026-01-02 00:02:46.418766 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=551f95f5-89d5-4c4c-89d2-f5559c6efa3b]
2026-01-02 00:02:48.573166 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=d4a3824b-0cf0-4b49-a142-2be69c143142]
2026-01-02 00:02:48.577528 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-02 00:02:48.578414 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-02 00:02:48.578433 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-02 00:02:48.826700 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=7e114614-d5da-4803-9502-c5f1c7756fc2]
2026-01-02 00:02:48.832530 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-02 00:02:48.832979 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-02 00:02:48.833627 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-02 00:02:48.833887 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-02 00:02:48.834794 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-02 00:02:48.838262 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-02 00:02:49.016188 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=ca989d5f-3618-4f6d-bb43-4c9dab8e883c]
2026-01-02 00:02:49.134556 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=11101238-0dae-4418-8918-16cb182b7201]
2026-01-02 00:02:49.141295 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-02 00:02:49.145969 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-02 00:02:49.146201 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-02 00:02:49.151623 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-02 00:02:49.194807 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=fe40cc33-47e7-407d-9c4c-c509b86bb51e]
2026-01-02 00:02:49.208357 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-02 00:02:49.479655 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=6f163c71-4666-4b40-b875-bf05dd0c645f]
2026-01-02 00:02:49.907967 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-02 00:02:49.908060 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6c01b171-19f1-4549-bb11-52e4135ad743]
2026-01-02 00:02:49.908071 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-02 00:02:49.908079 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=02b3f5d0-3ad0-4c65-91d6-f4fff4ff99fc]
2026-01-02 00:02:49.908087 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-02 00:02:50.022640 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=c8059f9c-6f2b-4674-a64f-ac0252a51bd1]
2026-01-02 00:02:50.035663 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-02 00:02:50.074045 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=5aac6504-95f1-4c8c-9c86-f13aa5def162]
2026-01-02 00:02:50.080084 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-02 00:02:50.563261 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=5f3da805-2386-4b81-a0be-699627c79e7a]
2026-01-02 00:02:50.706546 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=b0ec84eb-f186-440d-923c-61d5876f5b82]
2026-01-02 00:02:50.719273 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=6e71d94f-464b-44cb-8022-468de650d06b]
2026-01-02 00:02:50.798659 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=1985c94b-6018-4e8c-b693-4df9fee85a72]
2026-01-02 00:02:50.841794 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e7592525-89ea-4b7e-8149-ce80346863a3]
2026-01-02 00:02:51.147168 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=46d83806-61a3-41fc-ba19-60d2c0214084]
2026-01-02 00:02:51.219731 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=e7824c1c-a461-4203-8245-1a17c5855b00]
2026-01-02 00:02:51.244835 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=fdc8f55f-19e2-4c8d-bb27-d922cc2c9944]
2026-01-02 00:02:51.380183 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=3a64d59b-a4c3-431f-a092-a77c0ea78286]
2026-01-02 00:02:52.108581 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=2a62a1e7-9cee-4743-be75-610f90df6f85]
2026-01-02 00:02:52.124678 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-02 00:02:52.135911 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-02 00:02:52.145638 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-02 00:02:52.145960 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-02 00:02:52.146176 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-02 00:02:52.146590 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-02 00:02:52.167975 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-02 00:02:54.619642 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=f597e3bd-e8c4-4c4e-880b-b449801b8033]
2026-01-02 00:02:54.627631 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-02 00:02:54.634848 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-02 00:02:54.635096 | orchestrator | local_file.inventory: Creating...
2026-01-02 00:02:54.640231 | orchestrator | local_file.inventory: Creation complete after 0s [id=a77ef94af5a818ae040b3ad55069e1129956feff]
2026-01-02 00:02:54.640554 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d0c48119a0b37a7a18f6f83e33b298466055126e]
2026-01-02 00:02:55.507911 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=f597e3bd-e8c4-4c4e-880b-b449801b8033]
2026-01-02 00:03:02.137484 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-02 00:03:02.147898 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-02 00:03:02.148001 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-02 00:03:02.148014 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-02 00:03:02.151288 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-02 00:03:02.170521 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-02 00:03:12.138098 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-02 00:03:12.148521 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-02 00:03:12.148600 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-02 00:03:12.148617 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-02 00:03:12.151749 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-02 00:03:12.171280 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-02 00:03:22.146327 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-02 00:03:22.149465 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-02 00:03:22.149506 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-02 00:03:22.149519 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-02 00:03:22.152638 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-02 00:03:22.171941 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-02 00:03:32.155669 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-01-02 00:03:32.155847 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-02 00:03:32.155865 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-02 00:03:32.155877 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-02 00:03:32.155888 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-02 00:03:32.173243 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-02 00:03:32.856643 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=dee75e0c-c1cd-4551-9f57-c8764a472a83]
2026-01-02 00:03:33.251436 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=0040abe8-0593-4340-88df-a7f75e12139c]
2026-01-02 00:03:33.451189 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=a8158562-6ad3-4038-be27-36b9df851f1c]
2026-01-02 00:03:42.164279 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-01-02 00:03:42.164456 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-01-02 00:03:42.173754 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-02 00:03:43.088916 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=a7f6d397-709b-41fa-84e8-f03a5f5f1ef2]
2026-01-02 00:03:43.099724 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=a6887186-325e-453e-afdc-4ea28f1202d0]
2026-01-02 00:03:43.210845 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=7e69ef9f-997b-4053-8684-0824bd6b538e]
2026-01-02 00:03:43.244487 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-02 00:03:43.248467 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-02 00:03:43.249170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-02 00:03:43.253051 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-02 00:03:43.254491 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-02 00:03:43.255243 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-02 00:03:43.255358 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8875045804545648861]
2026-01-02 00:03:43.256096 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-02 00:03:43.256362 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-02 00:03:43.257233 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-02 00:03:43.265324 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-02 00:03:43.283283 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-02 00:03:46.669425 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=dee75e0c-c1cd-4551-9f57-c8764a472a83/88e6ca38-e9bc-414f-be79-2564fe6ee507]
2026-01-02 00:03:46.677008 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=a6887186-325e-453e-afdc-4ea28f1202d0/ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e]
2026-01-02 00:03:46.685876 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=a7f6d397-709b-41fa-84e8-f03a5f5f1ef2/1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f]
2026-01-02 00:03:46.703605 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=dee75e0c-c1cd-4551-9f57-c8764a472a83/d0e027c6-7483-4a58-a550-b5020c348e91]
2026-01-02 00:03:46.715504 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=a7f6d397-709b-41fa-84e8-f03a5f5f1ef2/2fd5b446-fd37-4cff-9553-5df2f9404005]
2026-01-02 00:03:46.717376 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=a6887186-325e-453e-afdc-4ea28f1202d0/afdcae1f-177b-4712-b40b-94f97a828de8]
2026-01-02 00:03:52.805185 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=dee75e0c-c1cd-4551-9f57-c8764a472a83/610525bf-123e-48f5-8f72-a088231f73d4]
2026-01-02 00:03:52.827608 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=a6887186-325e-453e-afdc-4ea28f1202d0/26e4f97c-d63e-4b12-851b-95c853c7feee]
2026-01-02 00:03:52.838402 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=a7f6d397-709b-41fa-84e8-f03a5f5f1ef2/a863269e-8a4c-456a-8159-1ce463f39daf]
2026-01-02 00:03:53.286843 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-02 00:04:03.288894 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-02 00:04:03.638813 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=5a1460d9-2dd7-49aa-a559-4264c2c74e26]
2026-01-02 00:04:03.703233 | orchestrator |
2026-01-02 00:04:03.703387 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-02 00:04:03.703403 | orchestrator |
2026-01-02 00:04:03.703413 | orchestrator | Outputs:
2026-01-02 00:04:03.703423 | orchestrator |
2026-01-02 00:04:03.703432 | orchestrator | manager_address =
2026-01-02 00:04:03.703441 | orchestrator | private_key =
2026-01-02 00:04:03.982017 | orchestrator | ok: Runtime: 0:01:30.732436
2026-01-02 00:04:04.024421 |
2026-01-02 00:04:04.024682 | TASK [Create infrastructure (stable)]
2026-01-02 00:04:04.560693 | orchestrator | skipping: Conditional result was False
2026-01-02 00:04:04.580433 |
2026-01-02 00:04:04.580605 | TASK [Fetch manager address]
2026-01-02 00:04:05.103232 | orchestrator | ok
2026-01-02 00:04:05.115456 |
2026-01-02 00:04:05.115620 | TASK [Set manager_host address]
2026-01-02 00:04:05.189701 | orchestrator | ok
2026-01-02 00:04:05.197006 |
2026-01-02 00:04:05.197145 | LOOP [Update ansible collections]
2026-01-02 00:04:06.212692 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-02 00:04:06.213346 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-02 00:04:06.213431 | orchestrator | Starting galaxy collection install process
2026-01-02 00:04:06.213475 | orchestrator | Process install dependency map
2026-01-02 00:04:06.213512 | orchestrator | Starting collection install process
2026-01-02 00:04:06.213546 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-01-02 00:04:06.213590 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-01-02 00:04:06.213652 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-02 00:04:06.213735 | orchestrator | ok: Item: commons Runtime: 0:00:00.658744
2026-01-02 00:04:07.141478 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-02 00:04:07.141641 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-02 00:04:07.142476 | orchestrator | Starting galaxy collection install process
2026-01-02 00:04:07.142533 | orchestrator | Process install dependency map
2026-01-02 00:04:07.142574 | orchestrator | Starting collection install process
2026-01-02 00:04:07.142611 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-01-02 00:04:07.142650 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-01-02 00:04:07.142686 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-02 00:04:07.142742 | orchestrator | ok: Item: services Runtime: 0:00:00.662714
2026-01-02 00:04:07.152613 |
2026-01-02 00:04:07.152740 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-02 00:04:17.719436 | orchestrator | ok
2026-01-02 00:04:17.730056 |
2026-01-02 00:04:17.730212 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-02 00:05:17.782004 | orchestrator | ok
2026-01-02 00:05:17.793481 |
2026-01-02 00:05:17.793631 | TASK [Fetch manager ssh hostkey]
2026-01-02 00:05:19.373631 | orchestrator | Output suppressed because no_log was given
2026-01-02 00:05:19.388685 |
2026-01-02 00:05:19.388866 | TASK [Get ssh keypair from terraform environment]
2026-01-02 00:05:19.938779 | orchestrator | ok: Runtime: 0:00:00.010482
2026-01-02 00:05:19.955434 |
2026-01-02 00:05:19.955632 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-02 00:05:20.001062 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-02 00:05:20.009005 |
2026-01-02 00:05:20.009193 | TASK [Run manager part 0]
2026-01-02 00:05:20.930664 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-02 00:05:20.981865 | orchestrator |
2026-01-02 00:05:20.981930 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-02 00:05:20.981943 | orchestrator |
2026-01-02 00:05:20.981966 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-02 00:05:22.653048 | orchestrator | ok: [testbed-manager]
2026-01-02 00:05:22.653123 | orchestrator |
2026-01-02 00:05:22.653154 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-02 00:05:22.653167 | orchestrator |
2026-01-02 00:05:22.653180 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-02 00:05:24.567368 | orchestrator | ok: [testbed-manager]
2026-01-02 00:05:24.567437 | orchestrator |
2026-01-02 00:05:24.567446 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-02 00:05:25.278841 | orchestrator | ok: [testbed-manager]
2026-01-02 00:05:25.278904 | orchestrator |
2026-01-02 00:05:25.278912 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-02 00:05:25.329513 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.329574 | orchestrator |
2026-01-02 00:05:25.329583 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-02 00:05:25.364151 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.364208 | orchestrator |
2026-01-02 00:05:25.364215 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-02 00:05:25.393902 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.393979 | orchestrator |
2026-01-02 00:05:25.393990 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-02 00:05:25.419753 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.419821 | orchestrator |
2026-01-02 00:05:25.419828 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-02 00:05:25.445914 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.445972 | orchestrator |
2026-01-02 00:05:25.445980 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-02 00:05:25.476066 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.476122 | orchestrator |
2026-01-02 00:05:25.476130 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-02 00:05:25.514475 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:05:25.514535 | orchestrator |
2026-01-02 00:05:25.514544 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-02 00:05:26.272822 | orchestrator | changed: [testbed-manager]
2026-01-02 00:05:26.272875 | orchestrator |
2026-01-02 00:05:26.272881 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-02 00:07:57.506203 | orchestrator | changed: [testbed-manager]
2026-01-02 00:07:57.506363 | orchestrator |
2026-01-02 00:07:57.506374 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-02 00:09:13.780638 | orchestrator | changed: [testbed-manager]
2026-01-02 00:09:13.780707 | orchestrator |
2026-01-02 00:09:13.780721 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-02 00:09:35.970794 | orchestrator | changed: [testbed-manager]
2026-01-02 00:09:35.970877 | orchestrator |
2026-01-02 00:09:35.970890 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-02 00:09:45.726264 | orchestrator | changed: [testbed-manager]
2026-01-02 00:09:45.726367 | orchestrator |
2026-01-02 00:09:45.726385 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-02 00:09:45.772456 | orchestrator | ok: [testbed-manager]
2026-01-02 00:09:45.772539 | orchestrator |
2026-01-02 00:09:45.772554 | orchestrator | TASK [Get current user] ********************************************************
2026-01-02 00:09:46.542006 | orchestrator | ok: [testbed-manager]
2026-01-02 00:09:46.542193 | orchestrator |
2026-01-02 00:09:46.542205 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-02 00:09:47.255849 | orchestrator | changed: [testbed-manager]
2026-01-02 00:09:47.255945 | orchestrator |
2026-01-02 00:09:47.255960 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-02 00:09:54.316308 | orchestrator | changed: [testbed-manager]
2026-01-02 00:09:54.316377 | orchestrator |
2026-01-02 00:09:54.316410 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-02 00:10:00.003273 | orchestrator | changed: [testbed-manager]
2026-01-02 00:10:00.003358 | orchestrator |
2026-01-02 00:10:00.003371 | orchestrator | TASK [Install requests >= 2.32.2]
********************************************** 2026-01-02 00:10:02.553816 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:02.553890 | orchestrator | 2026-01-02 00:10:02.553902 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-02 00:10:04.220640 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:04.220729 | orchestrator | 2026-01-02 00:10:04.220746 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-02 00:10:05.256718 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-02 00:10:05.256851 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-02 00:10:05.256864 | orchestrator | 2026-01-02 00:10:05.256871 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-02 00:10:05.297389 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-02 00:10:05.297433 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-02 00:10:05.297439 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-02 00:10:05.297444 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-02 00:10:10.544912 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-02 00:10:10.545017 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-02 00:10:10.545035 | orchestrator | 2026-01-02 00:10:10.545048 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-02 00:10:11.127519 | orchestrator | changed: [testbed-manager] 2026-01-02 00:10:11.127594 | orchestrator | 2026-01-02 00:10:11.127603 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-02 00:13:33.237638 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-02 00:13:33.237755 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-02 00:13:33.237774 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-02 00:13:33.237786 | orchestrator | 2026-01-02 00:13:33.237798 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-02 00:13:35.485883 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-02 00:13:35.485973 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-02 00:13:35.485988 | orchestrator | 2026-01-02 00:13:35.486000 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-02 00:13:35.486012 | orchestrator | 2026-01-02 00:13:35.486099 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:13:36.821665 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:36.821755 | orchestrator | 2026-01-02 00:13:36.821774 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-02 00:13:36.867841 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:36.867920 | 
orchestrator | 2026-01-02 00:13:36.867934 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-02 00:13:36.934268 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:36.934353 | orchestrator | 2026-01-02 00:13:36.934369 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-02 00:13:37.769704 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:37.769782 | orchestrator | 2026-01-02 00:13:37.769795 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-02 00:13:38.507937 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:38.508052 | orchestrator | 2026-01-02 00:13:38.508070 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-02 00:13:39.844890 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-02 00:13:39.844953 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-02 00:13:39.844961 | orchestrator | 2026-01-02 00:13:39.844980 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-02 00:13:41.208809 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:41.209049 | orchestrator | 2026-01-02 00:13:41.209066 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-02 00:13:42.941524 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:13:42.941612 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-02 00:13:42.941626 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:13:42.941636 | orchestrator | 2026-01-02 00:13:42.941647 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-02 00:13:43.004998 | orchestrator | skipping: 
[testbed-manager] 2026-01-02 00:13:43.005062 | orchestrator | 2026-01-02 00:13:43.005071 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-02 00:13:43.079363 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:43.079479 | orchestrator | 2026-01-02 00:13:43.079509 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-02 00:13:43.637470 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:43.637570 | orchestrator | 2026-01-02 00:13:43.637585 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-02 00:13:43.709945 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:43.710102 | orchestrator | 2026-01-02 00:13:43.710124 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-02 00:13:44.571269 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:13:44.571378 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:44.571402 | orchestrator | 2026-01-02 00:13:44.571420 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-02 00:13:44.610340 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:44.610424 | orchestrator | 2026-01-02 00:13:44.610439 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-02 00:13:44.648453 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:44.648542 | orchestrator | 2026-01-02 00:13:44.648554 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-02 00:13:44.685240 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:44.685317 | orchestrator | 2026-01-02 00:13:44.685334 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-02 00:13:44.753317 | 
orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:44.753376 | orchestrator | 2026-01-02 00:13:44.753382 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-02 00:13:45.504290 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:45.504386 | orchestrator | 2026-01-02 00:13:45.504402 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-02 00:13:45.504415 | orchestrator | 2026-01-02 00:13:45.504426 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:13:46.850450 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:46.850521 | orchestrator | 2026-01-02 00:13:46.850531 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-02 00:13:47.792407 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:47.792501 | orchestrator | 2026-01-02 00:13:47.792518 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:13:47.792531 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-02 00:13:47.792543 | orchestrator | 2026-01-02 00:13:48.319005 | orchestrator | ok: Runtime: 0:08:27.568209 2026-01-02 00:13:48.341609 | 2026-01-02 00:13:48.341796 | TASK [Point out that logging in to the manager is now possible] 2026-01-02 00:13:48.392656 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-02 00:13:48.402762 | 2026-01-02 00:13:48.402983 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-02 00:13:48.454444 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete.
2026-01-02 00:13:48.467029 | 2026-01-02 00:13:48.467193 | TASK [Run manager part 1 + 2] 2026-01-02 00:13:49.775918 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-02 00:13:49.831213 | orchestrator | 2026-01-02 00:13:49.831276 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-02 00:13:49.831288 | orchestrator | 2026-01-02 00:13:49.831306 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:13:52.635891 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:52.636012 | orchestrator | 2026-01-02 00:13:52.636064 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-02 00:13:52.677577 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:52.677632 | orchestrator | 2026-01-02 00:13:52.677643 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-02 00:13:52.731470 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:52.731531 | orchestrator | 2026-01-02 00:13:52.731542 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-02 00:13:52.771898 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:52.771949 | orchestrator | 2026-01-02 00:13:52.771957 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-02 00:13:52.843663 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:52.843821 | orchestrator | 2026-01-02 00:13:52.843830 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-02 00:13:52.920602 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:52.920660 | orchestrator | 2026-01-02 00:13:52.920670 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-02 00:13:52.973869 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-02 00:13:52.973921 | orchestrator | 2026-01-02 00:13:52.973927 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-02 00:13:53.635660 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:53.635700 | orchestrator | 2026-01-02 00:13:53.635708 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-02 00:13:53.682188 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:13:53.682227 | orchestrator | 2026-01-02 00:13:53.682233 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-02 00:13:55.030328 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:55.030389 | orchestrator | 2026-01-02 00:13:55.030399 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-02 00:13:55.588819 | orchestrator | ok: [testbed-manager] 2026-01-02 00:13:55.588869 | orchestrator | 2026-01-02 00:13:55.588877 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-02 00:13:56.696577 | orchestrator | changed: [testbed-manager] 2026-01-02 00:13:56.696672 | orchestrator | 2026-01-02 00:13:56.696693 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-02 00:14:10.258826 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:10.258880 | orchestrator | 2026-01-02 00:14:10.258887 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-02 00:14:10.961077 | orchestrator | ok: [testbed-manager] 2026-01-02 00:14:10.961125 | orchestrator | 2026-01-02 00:14:10.961133 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-02 00:14:11.012093 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:14:11.012141 | orchestrator | 2026-01-02 00:14:11.012148 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-02 00:14:11.958535 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:11.958594 | orchestrator | 2026-01-02 00:14:11.958604 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-02 00:14:12.852164 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:12.852205 | orchestrator | 2026-01-02 00:14:12.852231 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-02 00:14:13.375411 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:13.375507 | orchestrator | 2026-01-02 00:14:13.375530 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-02 00:14:13.414313 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-02 00:14:13.414433 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-02 00:14:13.414450 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-02 00:14:13.414462 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-02 00:14:15.794897 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:15.795002 | orchestrator | 2026-01-02 00:14:15.795048 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-02 00:14:26.084699 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-02 00:14:26.084767 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-02 00:14:26.084775 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-02 00:14:26.084781 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-02 00:14:26.084790 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-02 00:14:26.084795 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-02 00:14:26.084799 | orchestrator | 2026-01-02 00:14:26.084804 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-02 00:14:27.116632 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:27.116678 | orchestrator | 2026-01-02 00:14:27.116686 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-02 00:14:27.162210 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:14:27.162251 | orchestrator | 2026-01-02 00:14:27.162258 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-02 00:14:30.025897 | orchestrator | changed: [testbed-manager] 2026-01-02 00:14:30.025963 | orchestrator | 2026-01-02 00:14:30.025970 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-02 00:14:30.073146 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:14:30.073216 | orchestrator | 2026-01-02 00:14:30.073227 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-02 00:16:00.930288 | orchestrator | changed: [testbed-manager] 2026-01-02 
00:16:00.930332 | orchestrator | 2026-01-02 00:16:00.930340 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-02 00:16:01.976294 | orchestrator | ok: [testbed-manager] 2026-01-02 00:16:01.976394 | orchestrator | 2026-01-02 00:16:01.976413 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:16:01.976427 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-02 00:16:01.976439 | orchestrator | 2026-01-02 00:16:02.163169 | orchestrator | ok: Runtime: 0:02:13.276643 2026-01-02 00:16:02.175692 | 2026-01-02 00:16:02.175836 | TASK [Reboot manager] 2026-01-02 00:16:03.710830 | orchestrator | ok: Runtime: 0:00:00.880093 2026-01-02 00:16:03.729504 | 2026-01-02 00:16:03.729672 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-02 00:16:17.666756 | orchestrator | ok 2026-01-02 00:16:17.676386 | 2026-01-02 00:16:17.676534 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-02 00:17:17.720818 | orchestrator | ok 2026-01-02 00:17:17.729116 | 2026-01-02 00:17:17.729242 | TASK [Deploy manager + bootstrap nodes] 2026-01-02 00:17:20.142167 | orchestrator | 2026-01-02 00:17:20.142364 | orchestrator | # DEPLOY MANAGER 2026-01-02 00:17:20.142388 | orchestrator | 2026-01-02 00:17:20.142404 | orchestrator | + set -e 2026-01-02 00:17:20.142418 | orchestrator | + echo 2026-01-02 00:17:20.142434 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-02 00:17:20.142452 | orchestrator | + echo 2026-01-02 00:17:20.142502 | orchestrator | + cat /opt/manager-vars.sh 2026-01-02 00:17:20.145561 | orchestrator | export NUMBER_OF_NODES=6 2026-01-02 00:17:20.145643 | orchestrator | 2026-01-02 00:17:20.145661 | orchestrator | export CEPH_VERSION=reef 2026-01-02 00:17:20.145676 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-02 00:17:20.145691 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-02 00:17:20.145720 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-01-02 00:17:20.145732 | orchestrator | 2026-01-02 00:17:20.145751 | orchestrator | export ARA=false 2026-01-02 00:17:20.145763 | orchestrator | export DEPLOY_MODE=manager 2026-01-02 00:17:20.145781 | orchestrator | export TEMPEST=true 2026-01-02 00:17:20.145793 | orchestrator | export IS_ZUUL=true 2026-01-02 00:17:20.145805 | orchestrator | 2026-01-02 00:17:20.145823 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159 2026-01-02 00:17:20.145835 | orchestrator | export EXTERNAL_API=false 2026-01-02 00:17:20.145847 | orchestrator | 2026-01-02 00:17:20.145858 | orchestrator | export IMAGE_USER=ubuntu 2026-01-02 00:17:20.145871 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-02 00:17:20.145883 | orchestrator | 2026-01-02 00:17:20.145894 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-02 00:17:20.145915 | orchestrator | 2026-01-02 00:17:20.145927 | orchestrator | + echo 2026-01-02 00:17:20.145940 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-02 00:17:20.146682 | orchestrator | ++ export INTERACTIVE=false 2026-01-02 00:17:20.146708 | orchestrator | ++ INTERACTIVE=false 2026-01-02 00:17:20.146721 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-02 00:17:20.146736 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-02 00:17:20.146858 | orchestrator | + source /opt/manager-vars.sh 2026-01-02 00:17:20.146874 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-02 00:17:20.146886 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-02 00:17:20.146897 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-02 00:17:20.146962 | orchestrator | ++ CEPH_VERSION=reef 2026-01-02 00:17:20.147003 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-02 00:17:20.147024 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-02 00:17:20.147042 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-02 00:17:20.147061 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-02 00:17:20.147072 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-02 00:17:20.147096 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-02 00:17:20.147107 | orchestrator | ++ export ARA=false 2026-01-02 00:17:20.147119 | orchestrator | ++ ARA=false 2026-01-02 00:17:20.147130 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-02 00:17:20.147141 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-02 00:17:20.147160 | orchestrator | ++ export TEMPEST=true 2026-01-02 00:17:20.147171 | orchestrator | ++ TEMPEST=true 2026-01-02 00:17:20.147183 | orchestrator | ++ export IS_ZUUL=true 2026-01-02 00:17:20.147194 | orchestrator | ++ IS_ZUUL=true 2026-01-02 00:17:20.147205 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159 2026-01-02 00:17:20.147217 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159 2026-01-02 00:17:20.147228 | orchestrator | ++ export EXTERNAL_API=false 2026-01-02 00:17:20.147245 | orchestrator | ++ EXTERNAL_API=false 2026-01-02 00:17:20.147257 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-02 00:17:20.147267 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-02 00:17:20.147279 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-02 00:17:20.147291 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-02 00:17:20.147302 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-02 00:17:20.147313 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-02 00:17:20.147325 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-02 00:17:20.203868 | orchestrator | + docker version 2026-01-02 00:17:20.470299 | orchestrator | Client: Docker Engine - Community 2026-01-02 00:17:20.470411 | orchestrator | Version: 27.5.1 2026-01-02 00:17:20.470429 | orchestrator | API version: 1.47 2026-01-02 00:17:20.470446 | orchestrator | Go version: go1.22.11 2026-01-02 00:17:20.470458 | orchestrator | Git commit: 9f9e405 2026-01-02 00:17:20.470469 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-02 00:17:20.470483 | orchestrator | OS/Arch: linux/amd64 2026-01-02 00:17:20.470494 | orchestrator | Context: default 2026-01-02 00:17:20.470506 | orchestrator | 2026-01-02 00:17:20.470518 | orchestrator | Server: Docker Engine - Community 2026-01-02 00:17:20.470530 | orchestrator | Engine: 2026-01-02 00:17:20.470542 | orchestrator | Version: 27.5.1 2026-01-02 00:17:20.470554 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-02 00:17:20.470598 | orchestrator | Go version: go1.22.11 2026-01-02 00:17:20.470610 | orchestrator | Git commit: 4c9b3b0 2026-01-02 00:17:20.470622 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-02 00:17:20.470633 | orchestrator | OS/Arch: linux/amd64 2026-01-02 00:17:20.470644 | orchestrator | Experimental: false 2026-01-02 00:17:20.470655 | orchestrator | containerd: 2026-01-02 00:17:20.470666 | orchestrator | Version: v2.2.1 2026-01-02 00:17:20.470678 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-02 00:17:20.470689 | orchestrator | runc: 2026-01-02 00:17:20.470700 | orchestrator | Version: 1.3.4 2026-01-02 00:17:20.470712 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-02 00:17:20.470723 | orchestrator | docker-init: 2026-01-02 00:17:20.470734 | orchestrator | Version: 0.19.0 2026-01-02 00:17:20.470746 | orchestrator | GitCommit: de40ad0 2026-01-02 00:17:20.474161 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-02 00:17:20.482741 | orchestrator | + set -e 2026-01-02 00:17:20.482799 | orchestrator | + source /opt/manager-vars.sh 2026-01-02 00:17:20.482813 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-02 00:17:20.482826 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-02 00:17:20.482837 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-02 00:17:20.482849 | orchestrator | ++ CEPH_VERSION=reef 2026-01-02 00:17:20.482861 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-02 
00:17:20.482873 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-02 00:17:20.482884 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-02 00:17:20.482895 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-02 00:17:20.482905 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-02 00:17:20.482916 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-02 00:17:20.482927 | orchestrator | ++ export ARA=false 2026-01-02 00:17:20.482938 | orchestrator | ++ ARA=false 2026-01-02 00:17:20.482950 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-02 00:17:20.482961 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-02 00:17:20.483000 | orchestrator | ++ export TEMPEST=true 2026-01-02 00:17:20.483012 | orchestrator | ++ TEMPEST=true 2026-01-02 00:17:20.483023 | orchestrator | ++ export IS_ZUUL=true 2026-01-02 00:17:20.483034 | orchestrator | ++ IS_ZUUL=true 2026-01-02 00:17:20.483045 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159 2026-01-02 00:17:20.483057 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159 2026-01-02 00:17:20.483068 | orchestrator | ++ export EXTERNAL_API=false 2026-01-02 00:17:20.483079 | orchestrator | ++ EXTERNAL_API=false 2026-01-02 00:17:20.483090 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-02 00:17:20.483101 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-02 00:17:20.483112 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-02 00:17:20.483122 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-02 00:17:20.483134 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-02 00:17:20.483145 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-02 00:17:20.483156 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-02 00:17:20.483167 | orchestrator | ++ export INTERACTIVE=false 2026-01-02 00:17:20.483177 | orchestrator | ++ INTERACTIVE=false 2026-01-02 00:17:20.483188 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-02 00:17:20.483205 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-01-02 00:17:20.483216 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-02 00:17:20.483227 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-02 00:17:20.483238 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-02 00:17:20.489362 | orchestrator | + set -e 2026-01-02 00:17:20.489410 | orchestrator | + VERSION=reef 2026-01-02 00:17:20.490330 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-02 00:17:20.495570 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-02 00:17:20.495597 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-02 00:17:20.501051 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-01-02 00:17:20.507440 | orchestrator | + set -e 2026-01-02 00:17:20.507478 | orchestrator | + VERSION=2025.1 2026-01-02 00:17:20.507901 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-02 00:17:20.511636 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-02 00:17:20.511687 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-01-02 00:17:20.516074 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-02 00:17:20.516841 | orchestrator | ++ semver latest 7.0.0 2026-01-02 00:17:20.576243 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-02 00:17:20.576332 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-02 00:17:20.576348 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-02 00:17:20.576796 | orchestrator | ++ semver latest 10.0.0-0 2026-01-02 00:17:20.631616 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-02 00:17:20.632196 | orchestrator | ++ semver 2025.1 2025.1 2026-01-02 00:17:20.707420 | orchestrator | + [[ 0 -ge 0 ]] 2026-01-02 00:17:20.707504 | orchestrator | + 
sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-02 00:17:20.713103 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-02 00:17:20.717314 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-02 00:17:20.802392 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-02 00:17:20.803174 | orchestrator | + source /opt/venv/bin/activate 2026-01-02 00:17:20.804225 | orchestrator | ++ deactivate nondestructive 2026-01-02 00:17:20.804247 | orchestrator | ++ '[' -n '' ']' 2026-01-02 00:17:20.804304 | orchestrator | ++ '[' -n '' ']' 2026-01-02 00:17:20.804319 | orchestrator | ++ hash -r 2026-01-02 00:17:20.804331 | orchestrator | ++ '[' -n '' ']' 2026-01-02 00:17:20.804343 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-02 00:17:20.804354 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-02 00:17:20.804454 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-02 00:17:20.804686 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-02 00:17:20.804798 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-02 00:17:20.804822 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-02 00:17:20.804841 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-02 00:17:20.804862 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-02 00:17:20.804930 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-02 00:17:20.804945 | orchestrator | ++ export PATH 2026-01-02 00:17:20.804957 | orchestrator | ++ '[' -n '' ']' 2026-01-02 00:17:20.805016 | orchestrator | ++ '[' -z '' ']' 2026-01-02 00:17:20.805030 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-02 00:17:20.805042 | orchestrator | ++ PS1='(venv) ' 2026-01-02 00:17:20.805054 | orchestrator | ++ export PS1 2026-01-02 00:17:20.805065 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-02 00:17:20.805081 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-02 00:17:20.805093 | orchestrator | ++ hash -r 2026-01-02 00:17:20.805283 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-02 00:17:21.944782 | orchestrator | 2026-01-02 00:17:21.944894 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-02 00:17:21.944910 | orchestrator | 2026-01-02 00:17:21.944922 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-02 00:17:22.481809 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:22.481998 | orchestrator | 2026-01-02 00:17:22.482070 | orchestrator | TASK [Copy fact files] ********************************************************* 
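The `set-ceph-version.sh` / `set-openstack-version.sh` traces above boil down to a guarded in-place substitution: grep for the key, and only rewrite it if it is already present. A minimal sketch of that pattern (a temp file stands in for `/opt/configuration/environments/manager/configuration.yml`):

```shell
# Minimal sketch of the set-*-version.sh pattern traced above: only
# rewrite the key when it already exists in the configuration file.
set -e
CONFIG=$(mktemp)
echo 'openstack_version: 2024.2' > "$CONFIG"
VERSION=2025.1
# grep prints the current line; a non-empty result means the key exists
if [[ -n "$(grep '^openstack_version:' "$CONFIG")" ]]; then
    sed -i "s/openstack_version: .*/openstack_version: ${VERSION}/g" "$CONFIG"
fi
grep '^openstack_version:' "$CONFIG"   # openstack_version: 2025.1
```

The guard keeps the script from silently doing nothing useful when the key is absent, matching the `[[ -n openstack_version: 2024.2 ]]` check in the trace.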
2026-01-02 00:17:23.408660 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:23.408774 | orchestrator | 2026-01-02 00:17:23.408795 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-02 00:17:23.408809 | orchestrator | 2026-01-02 00:17:23.408821 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:17:25.592638 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:25.592757 | orchestrator | 2026-01-02 00:17:25.592775 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-02 00:17:25.638525 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:25.638640 | orchestrator | 2026-01-02 00:17:25.638659 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-02 00:17:26.086253 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:26.087116 | orchestrator | 2026-01-02 00:17:26.087154 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-02 00:17:26.115608 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:17:26.115703 | orchestrator | 2026-01-02 00:17:26.115719 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-02 00:17:26.448910 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:26.449082 | orchestrator | 2026-01-02 00:17:26.449100 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-02 00:17:26.511073 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:17:26.511148 | orchestrator | 2026-01-02 00:17:26.511155 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-02 00:17:26.851399 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:26.851508 | orchestrator | 2026-01-02 00:17:26.851525 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-02 00:17:26.977340 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:17:26.977447 | orchestrator | 2026-01-02 00:17:26.977466 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-02 00:17:26.977479 | orchestrator | 2026-01-02 00:17:26.977491 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:17:28.617231 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:28.617348 | orchestrator | 2026-01-02 00:17:28.617368 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-02 00:17:28.709132 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-02 00:17:28.709238 | orchestrator | 2026-01-02 00:17:28.709256 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-02 00:17:28.760330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-02 00:17:28.760429 | orchestrator | 2026-01-02 00:17:28.760444 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-02 00:17:29.854183 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-02 00:17:29.854294 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-02 00:17:29.854308 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-02 00:17:29.854320 | orchestrator | 2026-01-02 00:17:29.854331 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-02 00:17:31.568485 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-02 00:17:31.568617 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
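The per-item `changed:` results of the "Create required directories" task above correspond to an idempotent directory loop; a shell equivalent (the directory list is taken from the log, a scratch prefix keeps the sketch away from `/opt`):

```shell
# Shell equivalent of the "Create required directories" loop above;
# mkdir -p is idempotent, matching the per-item changed/ok results.
PREFIX=$(mktemp -d)
for dir in traefik traefik/certificates traefik/configuration; do
    mkdir -p "$PREFIX/$dir"
done
ls "$PREFIX/traefik"
```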
2026-01-02 00:17:31.568635 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-02 00:17:31.568662 | orchestrator | 2026-01-02 00:17:31.568719 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-02 00:17:32.165192 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:17:32.165288 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:32.165302 | orchestrator | 2026-01-02 00:17:32.165314 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-02 00:17:32.772181 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:17:32.772308 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:32.772336 | orchestrator | 2026-01-02 00:17:32.772355 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-02 00:17:32.818647 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:17:32.818736 | orchestrator | 2026-01-02 00:17:32.818748 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-02 00:17:33.163287 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:33.163414 | orchestrator | 2026-01-02 00:17:33.163440 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-02 00:17:33.225795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-02 00:17:33.225880 | orchestrator | 2026-01-02 00:17:33.225891 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-02 00:17:34.257222 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:34.257363 | orchestrator | 2026-01-02 00:17:34.257394 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-02 
00:17:35.021304 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:35.021405 | orchestrator | 2026-01-02 00:17:35.021423 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-02 00:17:56.376585 | orchestrator | changed: [testbed-manager] 2026-01-02 00:17:56.376712 | orchestrator | 2026-01-02 00:17:56.376741 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-02 00:17:56.426789 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:17:56.426875 | orchestrator | 2026-01-02 00:17:56.426889 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-02 00:17:56.426902 | orchestrator | 2026-01-02 00:17:56.426913 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:17:58.185180 | orchestrator | ok: [testbed-manager] 2026-01-02 00:17:58.185282 | orchestrator | 2026-01-02 00:17:58.185298 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-02 00:17:58.292227 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-02 00:17:58.292323 | orchestrator | 2026-01-02 00:17:58.292338 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-02 00:17:58.349160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:17:58.349265 | orchestrator | 2026-01-02 00:17:58.349284 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-02 00:18:00.775949 | orchestrator | ok: [testbed-manager] 2026-01-02 00:18:00.776109 | orchestrator | 2026-01-02 00:18:00.776128 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-02 00:18:00.825038 | 
orchestrator | ok: [testbed-manager] 2026-01-02 00:18:00.825142 | orchestrator | 2026-01-02 00:18:00.825160 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-02 00:18:00.954426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-02 00:18:00.954521 | orchestrator | 2026-01-02 00:18:00.954535 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-02 00:18:03.618790 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-02 00:18:03.619045 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-02 00:18:03.619067 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-02 00:18:03.619080 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-02 00:18:03.619092 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-02 00:18:03.619105 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-02 00:18:03.619117 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-02 00:18:03.619128 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-02 00:18:03.619139 | orchestrator | 2026-01-02 00:18:03.619152 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-02 00:18:04.208869 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:04.209001 | orchestrator | 2026-01-02 00:18:04.209020 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-02 00:18:04.796575 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:04.796666 | orchestrator | 2026-01-02 00:18:04.796679 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-02 
00:18:04.859508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-02 00:18:04.859608 | orchestrator | 2026-01-02 00:18:04.859624 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-02 00:18:05.976715 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-02 00:18:05.976819 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-02 00:18:05.976836 | orchestrator | 2026-01-02 00:18:05.976850 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-02 00:18:06.574569 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:06.574684 | orchestrator | 2026-01-02 00:18:06.574702 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-02 00:18:06.628421 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:18:06.628497 | orchestrator | 2026-01-02 00:18:06.628504 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-02 00:18:06.705019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-02 00:18:06.705163 | orchestrator | 2026-01-02 00:18:06.705186 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-02 00:18:07.288639 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:07.289420 | orchestrator | 2026-01-02 00:18:07.289469 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-02 00:18:07.352225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-02 00:18:07.352328 | orchestrator | 2026-01-02 00:18:07.352347 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-02 00:18:08.668912 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:18:08.669093 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-02 00:18:08.669120 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:08.669141 | orchestrator | 2026-01-02 00:18:08.669163 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-02 00:18:09.273435 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:09.273541 | orchestrator | 2026-01-02 00:18:09.273567 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-02 00:18:09.328173 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:18:09.328259 | orchestrator | 2026-01-02 00:18:09.328296 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-02 00:18:09.420482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-02 00:18:09.420583 | orchestrator | 2026-01-02 00:18:09.420598 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-02 00:18:09.923359 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:09.923479 | orchestrator | 2026-01-02 00:18:09.923498 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-02 00:18:10.321412 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:10.321495 | orchestrator | 2026-01-02 00:18:10.321507 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-02 00:18:11.456749 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-02 00:18:11.456864 | orchestrator | changed: [testbed-manager] => (item=openstack) 
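The two sysctl tasks above raise the inotify limits that the manager's file-watching services need. A sketch of how such settings are persisted (the numeric values are illustrative assumptions, since the log does not show them, and a temp file stands in for `/etc/sysctl.d/99-osism.conf`, which requires root):

```shell
# Persisting the inotify limits set above; values are illustrative --
# the log does not show what the role actually uses.
SYSCTL_FRAGMENT=$(mktemp)
{
    echo 'fs.inotify.max_user_watches = 1048576'
    echo 'fs.inotify.max_user_instances = 1024'
} > "$SYSCTL_FRAGMENT"
cat "$SYSCTL_FRAGMENT"
# apply immediately (requires root): sysctl -p "$SYSCTL_FRAGMENT"
```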
2026-01-02 00:18:11.456882 | orchestrator | 2026-01-02 00:18:11.456896 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-02 00:18:12.068605 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:12.068742 | orchestrator | 2026-01-02 00:18:12.068771 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-02 00:18:12.449863 | orchestrator | ok: [testbed-manager] 2026-01-02 00:18:12.450009 | orchestrator | 2026-01-02 00:18:12.450103 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-02 00:18:12.782462 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:12.782565 | orchestrator | 2026-01-02 00:18:12.782583 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-02 00:18:12.828618 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:18:12.828803 | orchestrator | 2026-01-02 00:18:12.828823 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-02 00:18:12.890271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-02 00:18:12.890364 | orchestrator | 2026-01-02 00:18:12.890378 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-02 00:18:12.937893 | orchestrator | ok: [testbed-manager] 2026-01-02 00:18:12.938088 | orchestrator | 2026-01-02 00:18:12.938109 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-02 00:18:14.852223 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-02 00:18:14.852337 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-02 00:18:14.852356 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
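The wrapper scripts copied above (`osism`, `osism-update-docker`, …) give the host a CLI that runs inside the `osismclient` container. A hypothetical sketch of the shape of such a wrapper, reduced to a command builder so it is runnable without Docker (container name and compose invocation are assumptions, not the role's actual template):

```shell
# Hypothetical shape of the "osism" wrapper above: forward the CLI call
# into the osismclient container. Built as a string here for testability;
# a real wrapper would `exec` the command instead of printing it.
osism_wrapper_cmd() {
    printf 'docker compose --project-directory /opt/manager exec osismclient osism %s' "$*"
}
osism_wrapper_cmd apply traefik
# -> docker compose --project-directory /opt/manager exec osismclient osism apply traefik
```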
2026-01-02 00:18:14.852368 | orchestrator | 2026-01-02 00:18:14.852392 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-02 00:18:15.538352 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:15.538491 | orchestrator | 2026-01-02 00:18:15.538511 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-02 00:18:16.221462 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:16.221577 | orchestrator | 2026-01-02 00:18:16.221596 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-02 00:18:16.901193 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:16.901272 | orchestrator | 2026-01-02 00:18:16.901280 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-02 00:18:16.969707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-02 00:18:16.969778 | orchestrator | 2026-01-02 00:18:16.969786 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-02 00:18:17.008137 | orchestrator | ok: [testbed-manager] 2026-01-02 00:18:17.008223 | orchestrator | 2026-01-02 00:18:17.008236 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-02 00:18:17.702308 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-02 00:18:17.702421 | orchestrator | 2026-01-02 00:18:17.702438 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-02 00:18:17.776139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-02 00:18:17.776240 | orchestrator | 2026-01-02 00:18:17.776255 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-02 00:18:18.472618 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:18.472717 | orchestrator | 2026-01-02 00:18:18.472732 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-02 00:18:19.027250 | orchestrator | ok: [testbed-manager] 2026-01-02 00:18:19.027362 | orchestrator | 2026-01-02 00:18:19.027381 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-02 00:18:19.071687 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:18:19.071788 | orchestrator | 2026-01-02 00:18:19.071804 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-02 00:18:19.128685 | orchestrator | ok: [testbed-manager] 2026-01-02 00:18:19.128781 | orchestrator | 2026-01-02 00:18:19.128797 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-02 00:18:19.945392 | orchestrator | changed: [testbed-manager] 2026-01-02 00:18:19.945502 | orchestrator | 2026-01-02 00:18:19.945520 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-02 00:19:22.128761 | orchestrator | changed: [testbed-manager] 2026-01-02 00:19:22.128885 | orchestrator | 2026-01-02 00:19:22.128903 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-02 00:19:23.029073 | orchestrator | ok: [testbed-manager] 2026-01-02 00:19:23.029175 | orchestrator | 2026-01-02 00:19:23.029212 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-02 00:19:23.080272 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:19:23.080392 | orchestrator | 2026-01-02 00:19:23.080420 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
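The paired "Set mariadb healthcheck for mariadb < 11.0.0 / >= 11.0.0" tasks above gate on the MariaDB version (11.8.4 here, so the `>= 11.0.0` branch ran). A sketch of that version gate using a `sort -V` comparison (the healthcheck commands in the branches are assumptions):

```shell
# Sketch of the mariadb healthcheck version gate above: pick the
# healthcheck by comparing the image version against 11.0.0.
version_ge() {  # success if $1 >= $2 (GNU sort -V ordering)
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
if version_ge 11.8.4 11.0.0; then
    echo 'healthcheck: healthcheck.sh --connect --innodb_initialized'
else
    echo 'healthcheck: mysqladmin ping'
fi
```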
2026-01-02 00:19:25.218704 | orchestrator | changed: [testbed-manager] 2026-01-02 00:19:25.218856 | orchestrator | 2026-01-02 00:19:25.218892 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-02 00:19:25.268134 | orchestrator | ok: [testbed-manager] 2026-01-02 00:19:25.268227 | orchestrator | 2026-01-02 00:19:25.268243 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-02 00:19:25.268256 | orchestrator | 2026-01-02 00:19:25.268268 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-02 00:19:25.309502 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:19:25.309620 | orchestrator | 2026-01-02 00:19:25.309640 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-02 00:20:25.360387 | orchestrator | Pausing for 60 seconds 2026-01-02 00:20:25.360482 | orchestrator | changed: [testbed-manager] 2026-01-02 00:20:25.360493 | orchestrator | 2026-01-02 00:20:25.360503 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-02 00:20:27.908568 | orchestrator | changed: [testbed-manager] 2026-01-02 00:20:27.908694 | orchestrator | 2026-01-02 00:20:27.908712 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-02 00:21:09.367755 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-02 00:21:09.367935 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
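The "FAILED - RETRYING ... (50 retries left)" lines above are Ansible's `until`/`retries`/`delay` loop polling the manager container's health. The generic form of such a poll, as a shell sketch (retry count and delay are assumptions mirroring typical task settings):

```shell
# Generic form of the wait-for-healthy handler above: poll a command
# until it succeeds or the retries are exhausted.
retry_until() {
    local retries=$1 delay=$2
    shift 2
    local i
    for ((i = 0; i < retries; i++)); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}
# The real handler would poll something like:
#   retry_until 50 5 sh -c 'docker inspect --format "{{.State.Health.Status}}" manager | grep -q healthy'
MARKER=$(mktemp)
retry_until 3 0 test -e "$MARKER" && echo healthy
```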
2026-01-02 00:21:09.368046 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:09.368072 | orchestrator | 2026-01-02 00:21:09.368093 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-02 00:21:19.290394 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:19.290527 | orchestrator | 2026-01-02 00:21:19.290553 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-02 00:21:19.373659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-02 00:21:19.373744 | orchestrator | 2026-01-02 00:21:19.373756 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-02 00:21:19.373766 | orchestrator | 2026-01-02 00:21:19.373775 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-02 00:21:19.427255 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:19.427336 | orchestrator | 2026-01-02 00:21:19.427351 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-02 00:21:19.500484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-02 00:21:19.500579 | orchestrator | 2026-01-02 00:21:19.500594 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-02 00:21:20.242307 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:20.242416 | orchestrator | 2026-01-02 00:21:20.242433 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-02 00:21:23.123265 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:23.123348 | orchestrator | 2026-01-02 00:21:23.123357 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-01-02 00:21:23.196986 | orchestrator | ok: [testbed-manager] => { 2026-01-02 00:21:23.197095 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-02 00:21:23.197113 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-02 00:21:23.197126 | orchestrator | "Checking running containers against expected versions...", 2026-01-02 00:21:23.197139 | orchestrator | "", 2026-01-02 00:21:23.197151 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-02 00:21:23.197163 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-02 00:21:23.197174 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197186 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-02 00:21:23.197197 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197208 | orchestrator | "", 2026-01-02 00:21:23.197220 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-02 00:21:23.197231 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-02 00:21:23.197242 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197253 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-02 00:21:23.197264 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197276 | orchestrator | "", 2026-01-02 00:21:23.197286 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-02 00:21:23.197297 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-02 00:21:23.197308 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197320 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-02 00:21:23.197331 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197342 | orchestrator | "", 2026-01-02 00:21:23.197353 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-02 00:21:23.197365 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-02 00:21:23.197402 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197414 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-02 00:21:23.197425 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197436 | orchestrator | "", 2026-01-02 00:21:23.197447 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-02 00:21:23.197458 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-02 00:21:23.197469 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197480 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-02 00:21:23.197491 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197503 | orchestrator | "", 2026-01-02 00:21:23.197516 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-02 00:21:23.197529 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.197543 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197555 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.197568 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197581 | orchestrator | "", 2026-01-02 00:21:23.197594 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-02 00:21:23.197607 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-02 00:21:23.197620 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197633 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-02 00:21:23.197646 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197658 | orchestrator | "", 2026-01-02 00:21:23.197670 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-02 00:21:23.197692 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-02 00:21:23.197705 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197722 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-02 00:21:23.197734 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197749 | orchestrator | "", 2026-01-02 00:21:23.197761 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-02 00:21:23.197774 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-02 00:21:23.197787 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197800 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-02 00:21:23.197812 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197825 | orchestrator | "", 2026-01-02 00:21:23.197837 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-02 00:21:23.197850 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-02 00:21:23.197863 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197875 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-02 00:21:23.197888 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197899 | orchestrator | "", 2026-01-02 00:21:23.197909 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-02 00:21:23.197920 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.197931 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.197962 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.197974 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.197985 | orchestrator | "", 2026-01-02 00:21:23.197996 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-02 00:21:23.198007 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198068 | 
orchestrator | " Enabled: true", 2026-01-02 00:21:23.198081 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198092 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.198103 | orchestrator | "", 2026-01-02 00:21:23.198114 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-02 00:21:23.198125 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198136 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.198147 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198167 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.198178 | orchestrator | "", 2026-01-02 00:21:23.198189 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-02 00:21:23.198200 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198211 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.198222 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198233 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.198243 | orchestrator | "", 2026-01-02 00:21:23.198255 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-02 00:21:23.198284 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198295 | orchestrator | " Enabled: true", 2026-01-02 00:21:23.198307 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-02 00:21:23.198318 | orchestrator | " Status: ✅ MATCH", 2026-01-02 00:21:23.198329 | orchestrator | "", 2026-01-02 00:21:23.198340 | orchestrator | "=== Summary ===", 2026-01-02 00:21:23.198351 | orchestrator | "Errors (version mismatches): 0", 2026-01-02 00:21:23.198362 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-02 00:21:23.198373 | orchestrator | "", 2026-01-02 00:21:23.198384 | orchestrator | "✅ All running containers match expected 
versions!" 2026-01-02 00:21:23.198396 | orchestrator | ] 2026-01-02 00:21:23.198407 | orchestrator | } 2026-01-02 00:21:23.198419 | orchestrator | 2026-01-02 00:21:23.198430 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-02 00:21:23.242257 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:23.242356 | orchestrator | 2026-01-02 00:21:23.242371 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:21:23.242387 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-02 00:21:23.242399 | orchestrator | 2026-01-02 00:21:23.334235 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-02 00:21:23.334316 | orchestrator | + deactivate 2026-01-02 00:21:23.334327 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-02 00:21:23.334337 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-02 00:21:23.334344 | orchestrator | + export PATH 2026-01-02 00:21:23.334352 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-02 00:21:23.334359 | orchestrator | + '[' -n '' ']' 2026-01-02 00:21:23.334367 | orchestrator | + hash -r 2026-01-02 00:21:23.334374 | orchestrator | + '[' -n '' ']' 2026-01-02 00:21:23.334381 | orchestrator | + unset VIRTUAL_ENV 2026-01-02 00:21:23.334388 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-02 00:21:23.334395 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-02 00:21:23.334402 | orchestrator | + unset -f deactivate 2026-01-02 00:21:23.334410 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-02 00:21:23.342213 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-02 00:21:23.342261 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-02 00:21:23.342274 | orchestrator | + local max_attempts=60 2026-01-02 00:21:23.342286 | orchestrator | + local name=ceph-ansible 2026-01-02 00:21:23.342297 | orchestrator | + local attempt_num=1 2026-01-02 00:21:23.342732 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-02 00:21:23.372872 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:21:23.373002 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-02 00:21:23.373020 | orchestrator | + local max_attempts=60 2026-01-02 00:21:23.373032 | orchestrator | + local name=kolla-ansible 2026-01-02 00:21:23.373044 | orchestrator | + local attempt_num=1 2026-01-02 00:21:23.373265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-02 00:21:23.402128 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:21:23.402257 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-02 00:21:23.402475 | orchestrator | + local max_attempts=60 2026-01-02 00:21:23.402504 | orchestrator | + local name=osism-ansible 2026-01-02 00:21:23.402523 | orchestrator | + local attempt_num=1 2026-01-02 00:21:23.402559 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-02 00:21:23.439727 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-02 00:21:23.439834 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-02 00:21:23.439848 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-02 00:21:24.166804 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-02 00:21:24.342433 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-02 00:21:24.342543 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342559 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342572 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-01-02 00:21:24.342586 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-01-02 00:21:24.342597 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342608 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342640 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-01-02 00:21:24.342652 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342663 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-01-02 00:21:24.342674 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-01-02 00:21:24.342685 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-01-02 00:21:24.342696 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342707 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-01-02 00:21:24.342718 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.342729 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-01-02 00:21:24.347492 | orchestrator | ++ semver latest 7.0.0 2026-01-02 00:21:24.399468 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-02 00:21:24.399560 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-02 00:21:24.399576 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-02 00:21:24.403744 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-02 00:21:36.686161 | orchestrator | 2026-01-02 00:21:36 | INFO  | Task bed45e7b-2f15-4178-a9e3-f156a33f6310 (resolvconf) was prepared for execution. 2026-01-02 00:21:36.686253 | orchestrator | 2026-01-02 00:21:36 | INFO  | It takes a moment until task bed45e7b-2f15-4178-a9e3-f156a33f6310 (resolvconf) has been started and output is visible here. 
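The xtrace above steps through `wait_for_container_healthy`, which polls `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A minimal sketch of such a polling helper, reconstructed from the trace (the argument names and the attempt limit appear in the log; the retry sleep interval is an assumption):

```shell
# Sketch of a container health-wait helper matching the trace above.
# The max_attempts/name arguments mirror the log; the 5s sleep is assumed.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    # Poll the Docker healthcheck status until it reports "healthy".
    while [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy on the first poll, so each call returned immediately.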
2026-01-02 00:21:49.043821 | orchestrator | 2026-01-02 00:21:49.043929 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-02 00:21:49.043970 | orchestrator | 2026-01-02 00:21:49.043982 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:21:49.043991 | orchestrator | Friday 02 January 2026 00:21:40 +0000 (0:00:00.101) 0:00:00.101 ******** 2026-01-02 00:21:49.044001 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:49.044012 | orchestrator | 2026-01-02 00:21:49.044021 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-02 00:21:49.044031 | orchestrator | Friday 02 January 2026 00:21:43 +0000 (0:00:03.310) 0:00:03.412 ******** 2026-01-02 00:21:49.044040 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:49.044050 | orchestrator | 2026-01-02 00:21:49.044059 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-02 00:21:49.044068 | orchestrator | Friday 02 January 2026 00:21:43 +0000 (0:00:00.058) 0:00:03.470 ******** 2026-01-02 00:21:49.044077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-02 00:21:49.044087 | orchestrator | 2026-01-02 00:21:49.044096 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-02 00:21:49.044105 | orchestrator | Friday 02 January 2026 00:21:43 +0000 (0:00:00.070) 0:00:03.541 ******** 2026-01-02 00:21:49.044123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:21:49.044132 | orchestrator | 2026-01-02 00:21:49.044141 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-02 00:21:49.044151 | orchestrator | Friday 02 January 2026 00:21:43 +0000 (0:00:00.065) 0:00:03.607 ******** 2026-01-02 00:21:49.044160 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:49.044169 | orchestrator | 2026-01-02 00:21:49.044177 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-02 00:21:49.044186 | orchestrator | Friday 02 January 2026 00:21:44 +0000 (0:00:00.870) 0:00:04.477 ******** 2026-01-02 00:21:49.044201 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:49.044216 | orchestrator | 2026-01-02 00:21:49.044231 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-02 00:21:49.044245 | orchestrator | Friday 02 January 2026 00:21:44 +0000 (0:00:00.054) 0:00:04.531 ******** 2026-01-02 00:21:49.044258 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:49.044272 | orchestrator | 2026-01-02 00:21:49.044287 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-02 00:21:49.044301 | orchestrator | Friday 02 January 2026 00:21:45 +0000 (0:00:00.482) 0:00:05.014 ******** 2026-01-02 00:21:49.044315 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:21:49.044329 | orchestrator | 2026-01-02 00:21:49.044343 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-02 00:21:49.044360 | orchestrator | Friday 02 January 2026 00:21:45 +0000 (0:00:00.078) 0:00:05.093 ******** 2026-01-02 00:21:49.044377 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:49.044391 | orchestrator | 2026-01-02 00:21:49.044406 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-02 00:21:49.044420 | orchestrator | Friday 02 January 2026 00:21:45 +0000 (0:00:00.542) 0:00:05.635 ******** 2026-01-02 00:21:49.044436 | orchestrator | changed: 
[testbed-manager] 2026-01-02 00:21:49.044476 | orchestrator | 2026-01-02 00:21:49.044492 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-02 00:21:49.044507 | orchestrator | Friday 02 January 2026 00:21:46 +0000 (0:00:01.022) 0:00:06.658 ******** 2026-01-02 00:21:49.044522 | orchestrator | ok: [testbed-manager] 2026-01-02 00:21:49.044538 | orchestrator | 2026-01-02 00:21:49.044553 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-02 00:21:49.044567 | orchestrator | Friday 02 January 2026 00:21:47 +0000 (0:00:00.929) 0:00:07.587 ******** 2026-01-02 00:21:49.044584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-02 00:21:49.044599 | orchestrator | 2026-01-02 00:21:49.044613 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-02 00:21:49.044628 | orchestrator | Friday 02 January 2026 00:21:47 +0000 (0:00:00.075) 0:00:07.663 ******** 2026-01-02 00:21:49.044642 | orchestrator | changed: [testbed-manager] 2026-01-02 00:21:49.044658 | orchestrator | 2026-01-02 00:21:49.044674 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:21:49.044692 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-02 00:21:49.044707 | orchestrator | 2026-01-02 00:21:49.044722 | orchestrator | 2026-01-02 00:21:49.044737 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:21:49.044751 | orchestrator | Friday 02 January 2026 00:21:48 +0000 (0:00:01.101) 0:00:08.764 ******** 2026-01-02 00:21:49.044767 | orchestrator | =============================================================================== 2026-01-02 00:21:49.044781 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.31s 2026-01-02 00:21:49.044797 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2026-01-02 00:21:49.044818 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.02s 2026-01-02 00:21:49.044834 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2026-01-02 00:21:49.044844 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.87s 2026-01-02 00:21:49.044854 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2026-01-02 00:21:49.044893 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2026-01-02 00:21:49.044910 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-02 00:21:49.044926 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-01-02 00:21:49.044967 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-01-02 00:21:49.044983 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-01-02 00:21:49.044997 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-01-02 00:21:49.045006 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-01-02 00:21:49.307873 | orchestrator | + osism apply sshconfig 2026-01-02 00:22:01.300444 | orchestrator | 2026-01-02 00:22:01 | INFO  | Task 919f97aa-bd6f-4168-8981-11e34edd7ce6 (sshconfig) was prepared for execution. 
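Earlier in the trace, the deploy script runs `semver latest 7.0.0`, gets `-1`, and then falls back to an explicit `[[ latest == latest ]]` check: the `latest` tag does not parse as a semantic version, so it is special-cased rather than compared numerically. A hedged sketch of that guard (the `semver` helper printing a -1/0/1 comparison result is an assumption based on the log output):

```shell
# Sketch of the version guard seen in the trace: a "latest" tag cannot be
# compared as semver, so it is treated as at least as new as any release.
# Assumes a `semver A B` helper that prints -1, 0, or 1 (as in the log).
version_at_least() {
    current="$1"
    required="$2"
    if [ "$current" = "latest" ]; then
        return 0
    fi
    [ "$(semver "$current" "$required")" -ge 0 ]
}
```

With this pattern, a pinned release like `8.1.0` goes through the numeric comparison while `latest` short-circuits to success, matching the branch taken in the trace.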
2026-01-02 00:22:01.300523 | orchestrator | 2026-01-02 00:22:01 | INFO  | It takes a moment until task 919f97aa-bd6f-4168-8981-11e34edd7ce6 (sshconfig) has been started and output is visible here. 2026-01-02 00:22:11.571083 | orchestrator | 2026-01-02 00:22:11.571202 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-02 00:22:11.571220 | orchestrator | 2026-01-02 00:22:11.571232 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-02 00:22:11.571245 | orchestrator | Friday 02 January 2026 00:22:04 +0000 (0:00:00.114) 0:00:00.114 ******** 2026-01-02 00:22:11.571257 | orchestrator | ok: [testbed-manager] 2026-01-02 00:22:11.571297 | orchestrator | 2026-01-02 00:22:11.571310 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-02 00:22:11.571321 | orchestrator | Friday 02 January 2026 00:22:05 +0000 (0:00:00.476) 0:00:00.590 ******** 2026-01-02 00:22:11.571332 | orchestrator | changed: [testbed-manager] 2026-01-02 00:22:11.571344 | orchestrator | 2026-01-02 00:22:11.571355 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-02 00:22:11.571366 | orchestrator | Friday 02 January 2026 00:22:05 +0000 (0:00:00.398) 0:00:00.989 ******** 2026-01-02 00:22:11.571377 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-02 00:22:11.571388 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-02 00:22:11.571399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-02 00:22:11.571410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-02 00:22:11.571420 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-02 00:22:11.571431 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-02 00:22:11.571442 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-02 00:22:11.571453 | orchestrator | 2026-01-02 00:22:11.571464 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-02 00:22:11.571475 | orchestrator | Friday 02 January 2026 00:22:10 +0000 (0:00:04.893) 0:00:05.882 ******** 2026-01-02 00:22:11.571486 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:11.571497 | orchestrator | 2026-01-02 00:22:11.571508 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-02 00:22:11.571519 | orchestrator | Friday 02 January 2026 00:22:10 +0000 (0:00:00.068) 0:00:05.950 ******** 2026-01-02 00:22:11.571530 | orchestrator | changed: [testbed-manager] 2026-01-02 00:22:11.571541 | orchestrator | 2026-01-02 00:22:11.571552 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:22:11.571564 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:22:11.571576 | orchestrator | 2026-01-02 00:22:11.571603 | orchestrator | 2026-01-02 00:22:11.571625 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:22:11.571637 | orchestrator | Friday 02 January 2026 00:22:11 +0000 (0:00:00.547) 0:00:06.498 ******** 2026-01-02 00:22:11.571648 | orchestrator | =============================================================================== 2026-01-02 00:22:11.571659 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.89s 2026-01-02 00:22:11.571670 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-01-02 00:22:11.571681 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s 2026-01-02 00:22:11.571692 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.40s 2026-01-02 00:22:11.571703 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-02 00:22:11.857204 | orchestrator | + osism apply known-hosts 2026-01-02 00:22:23.831564 | orchestrator | 2026-01-02 00:22:23 | INFO  | Task 1c823b21-39e4-4ab4-b33d-287d7adc46a8 (known-hosts) was prepared for execution. 2026-01-02 00:22:23.831653 | orchestrator | 2026-01-02 00:22:23 | INFO  | It takes a moment until task 1c823b21-39e4-4ab4-b33d-287d7adc46a8 (known-hosts) has been started and output is visible here. 2026-01-02 00:22:39.389371 | orchestrator | 2026-01-02 00:22:39.390379 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-02 00:22:39.390418 | orchestrator | 2026-01-02 00:22:39.390432 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-02 00:22:39.390444 | orchestrator | Friday 02 January 2026 00:22:27 +0000 (0:00:00.145) 0:00:00.145 ******** 2026-01-02 00:22:39.390456 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-02 00:22:39.390468 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-02 00:22:39.390502 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-02 00:22:39.390513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-02 00:22:39.390524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-02 00:22:39.390536 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-02 00:22:39.390546 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-02 00:22:39.390557 | orchestrator | 2026-01-02 00:22:39.390569 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-02 00:22:39.390582 | orchestrator | Friday 02 January 2026 00:22:33 +0000 (0:00:05.491) 0:00:05.637 ******** 2026-01-02 
00:22:39.390595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-02 00:22:39.390620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-02 00:22:39.390631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-02 00:22:39.390643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-02 00:22:39.390654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-02 00:22:39.390665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-02 00:22:39.390676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-02 00:22:39.390687 | orchestrator | 2026-01-02 00:22:39.390698 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:39.390709 | orchestrator | Friday 02 January 2026 00:22:33 +0000 (0:00:00.152) 0:00:05.789 ******** 2026-01-02 00:22:39.390724 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4HCJ3I4NVoquhr/KDATBOndi3ABVozRx47rSjnTEf4lkr2ayKX2YUrtRXZnRBJ0HS7LnIVyRPiV4e2vHp/pOez8rlSTMtskLNi5l+U35eFUlPr1iS71z7roysAjVlWZsbW/lnXTFDNV31n42UdT+bDZM6LlbAw2h4yWLbO+WJQEJ6Znz/bOmwDYv9QYxIxnh9L81BJ3QI6aKcsNwBsdVK33HvE95hYp0v+ItUvMPTG4Ql6d5flFoIRSsv7aNCq1y7Z1p3PRENm6EVfBvyntQTVTtVkuIzx/UliiA1kbAAgUTtx3rGXma0UpoHJjHtSvc7so01/V8h8Yrk4A2fC6RaPKpfdnqav7QlRE1FmHo+BsK2KY4mu+idUmYggD0R86jQxz2Uu3mOfhNFm2cLaNFWdKmoF8oNBCL6AMCwCQT7cbbsT9z2X9PuPXrjuW2Ta3Iq9OyKfummXjC2OnUPF3KlIG1P7b+Mw9CQwLodOnp5PRUGKMSu2/p0GuGujABSe38=) 2026-01-02 00:22:39.390744 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEjzI3ZqXvvDUhUvhoH9g1nh7ZtDufN0YrsELyXEoaLx+o7XPXYLT2V1WR+dSbdamyG+yh6V0Gh8a6n4OW+wlyY=) 2026-01-02 00:22:39.390758 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEAT71+/BKj+yU7831xln6CKadw0ZP5vbTBvEMP3Nwob) 2026-01-02 00:22:39.390770 | orchestrator | 2026-01-02 00:22:39.390782 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:39.390793 | orchestrator | Friday 02 January 2026 00:22:34 +0000 (0:00:01.127) 0:00:06.917 ******** 2026-01-02 00:22:39.390826 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGnNcuz2sYyBiD8fpPp0Zxk1S6mCwSscfjjXwwfUZKLrgvt/pzsuv10YT6wOUQnnoHHu+kcn5OhTXUv79SAvOi5Qy6XPPBAr3Kl7en8FQAi2jW/TR6aZkPZtpnPtEepaTfAVYJyfbYK/W6+MPQgNCPRk6VGjnbbU7DWKOs/I1MGPQ3XFudbL0szQtbMoyGUjvStrTm4yooilqXdc9pImh35nTCP6+ChZHn9z16VHQISeQDa/XAsWt6WxrauD4L/+6X1F0Tb8UYb9cTmx29YNKduFjg8r4GiaccD3jDGjcecHAIw+1/rUhaQZNsBU7vv+oYRmavSSPQEZUlfi/BOT3WUi3DF4t/zM8qXUfdvNM32TdfhSz91Ho9F6apq9kjHlTAQkX8Tj2yYr9O9w+CIDMknO6LA0AAAlMwDNO7vheu1FO/+9IiboGckt74d425mQzq7dw7tYsDd7Q1YN8fSUdidPIe83T/sju2iyaOlq5YnZOffjjPpHtRb5cEGXINoMc=) 2026-01-02 00:22:39.390847 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC9ssuWROCcFwms75w5S/t2vSzOw4HQLM5I+LockqnkhHXY0iAZ1c3UZL1FumjUjTzMUwMBQFGjlBUeDD4q76AA=) 2026-01-02 00:22:39.390859 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIdnB4pMvHe0ymT5svjy7ZXxAbpEZc4XADr8JfFoenao) 2026-01-02 00:22:39.390870 | orchestrator | 2026-01-02 00:22:39.390881 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:39.390893 | orchestrator | Friday 02 January 2026 00:22:35 +0000 (0:00:00.993) 0:00:07.910 ******** 2026-01-02 00:22:39.390904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnG3FdhmW0pKjO5s/5DsMMIqDbDfzhIjg71xDXsxqMFFUqQnR3Dtl7N4viSyx+/Sltb8p3ergl6iMZkwHqhD4NP0MhGZTk1+8tr8R5K8n9eyno2EobamAAS5FORadsnWk1ZDf0e2H5Zl5x0mw+U9bWo59gC5pcyks94fcbBTQ01KKRaMSO36r2l5SNVb5fmahbQlCk9NnelsDee1RHfCxj6c8i1npR72Zg2U62jn1siLgRVofK7qs6z7G5R+14Ht02+k6BRGj3jduvwnTQnxzWkvV2+FletAN1g3Brjs3XmSiSdXvGv4NjuUTlrZE3x77sZkXwpfkWozCaQd3z9ZtUQqi0UEQAJIDNPx/eVJQjjM7WOPDMui55E+CYY4qC/yZhJSTebQF768s+ODMsouoD6Imslb138USRmfMuBmxSKv7Z98zWUYpwJ5O+F/Xn5BZOln336Kc7I8dvlaC09BLna5AQ1/V5MihxV95p/hTYm/aYBJSUI+RvxmB150NsbIU=) 2026-01-02 00:22:39.390916 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDp2hLKHZFeRxe6Jp5itr09No+UKfQVFGkcxwUQfbwE6) 2026-01-02 00:22:39.390927 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLQErnHnCkdjH8CD49BVkGZrbU2pPZmBXsFqXpFandve29q0pzJNF++Re7zS+9UwpQ6gtnciDzJdlw7eVslyr0g=) 2026-01-02 00:22:39.390959 | orchestrator | 2026-01-02 00:22:39.390971 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:39.391052 | orchestrator | Friday 02 January 2026 00:22:36 +0000 (0:00:01.000) 
0:00:08.910 ******** 2026-01-02 00:22:39.391064 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH+m/IRyVR7wJXEOA9UDbNpN298Fefj+ts8YgpnCENVpVIs77+Xsu8VXkDjHeRl3dDdXh7QqePUgktNJCvgdlqI=) 2026-01-02 00:22:39.391076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCk8FGHp3IpMJnBZ8aQm3Md4mnzCrFKAb8bqSGXrwKJOuQkp+nfQDb3oVYSG23oeMO48MFbUPNzE1rOjws2JHLKn/Uy60PCVLRb18zlA0sBpNFg187+WK+Uc0/fgaj3aLuJLlZuFUeUbDr3O72q/0x8yareFuFKxUzvVkq1I8GWjiiHWFADsYRl5n2B8Ai3ho5+ss/XLpDg9quqC8AlGcS0tvpd0O/xNmhPdRC28sz09vgDKJzQ9Qf3TPyTZ4SviVFUYSxoNLHPwGPzvg6RqqJ6GzJ6x+5lB72JkrYasfXbA3AC8z30mfl8xm/40Ys1h8oIGXxZRd4QQ8/tnJFRD5FgtIREjXBNfzpPr3LoNzYPulqxWsIDw7vgrxAkE5fC3ab0JKV3qX/W5N2+jvkdJWlSp7vBYbQhr197aT9coPtfWope5kr2eII+9ahjGh1ITyATGznAc5hJ1SJKahx7VXdUlsY9wFnXqy4NdVOwcB/iawp0AlSz29/ChBF/TmfhJGs=) 2026-01-02 00:22:39.391088 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOzBuIuHn4Fks3GmpAPnlcU9m3xgg2QnoNZ9Wnyvv6AB) 2026-01-02 00:22:39.391099 | orchestrator | 2026-01-02 00:22:39.391110 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:39.391121 | orchestrator | Friday 02 January 2026 00:22:37 +0000 (0:00:01.004) 0:00:09.915 ******** 2026-01-02 00:22:39.391133 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD4aalemRaKq/rPo7fArdqyj6YQyPkDwxlrJVtIkmJtRdKtOaMkBDADpTtbDktHiZw0td5xqR9bcpYuueUdIB/2icT21iFFKlX5fSIZ7eKtdgWRgvf5VsOmtUjDRN4qVYyQOVmdQDl5IYD+lo2admF4Ij8pC3TM6MH9FeOLompfOV8veHBLqd2yad5pK7EsxeZp3xNeBIzX94MDtMHeMey/vG3pvPD34bccX4OppCaCGgVPGZJw0OESoqr6aqoBSSK6PeaUazoyHlvokeGneoM9lGL+rMF2/IZ4mCdrdp5Mcoo1tTye3ylDcjkNA7BpTxfVTD/urHL1MLKen6tBmyY32FEwQ+1kzArE7DehLAHzE6QYavdkD1nNDFWYAOWva9DZ3/w5jk7LnasSc5nK5EEne9LRcBZxhCdhltYQ69/527UtXVdfWuGR257Xi2jNaNAsW2oGr/Xkn8iqqsw3Jyz/TIhz8ekblaGEZK0IFxXPnxbrg1XXsC5mSuRSWJSHNVk=) 2026-01-02 00:22:39.391151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbknlGOA/Uy0tbGmY3L60RM1mm/MEkds9c03Kd6i5MsIDixCPWm3E4HkeWXy9H1XhhIR8sPq6xMSlOk7fXse2I=) 2026-01-02 00:22:39.391162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICTA1TQD2PBOdBklfq/icrNrVTbiGc9Ted1SvnhRja4J) 2026-01-02 00:22:39.391173 | orchestrator | 2026-01-02 00:22:39.391185 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:39.391195 | orchestrator | Friday 02 January 2026 00:22:38 +0000 (0:00:01.029) 0:00:10.944 ******** 2026-01-02 00:22:39.391214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBnD/sjOARGwH9TuK217wSRAyvemaksOKwIV9XA5xUFfVquQ+jgRUOPG1kX/3PXBQmKlM5PY/jFsWbgEWsCjAo=) 2026-01-02 00:22:50.672603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDj8xEBDlzFQNk5GsrSxBMXFnrj+E0o6PUegiyuqArP9hDR/c4e1jUroua9dbQNdq4iTCgiLcKdb65/C/uDCaHmSXMhQjFENdfaDPMkp4FjlIulb54lV+MX44F30LyFiUpnjwnzMXtzP7eBB6kMplX8cChjJtQtBf13mVb8t8WfULSgVw0GIkgpEANnrZtNSM3hunumo0ybV7O49iC2HhN7lgOh42dZqbf9CLB69EScGQoKX0/uNg2Uj3p9JAC1mUTg5zkX/bTOpNsqQygH8j6n6g4IgUjzYHAdRb8AJmWLNALi+6Cew6gf16tTSEsTXIw4LvbrP7BFhSCAx+DzbhKUQAB8c8oCIFB4tENAqzLOmXasdtHc0LLuPWiW2JsgLiuEhng2oSXRh6B+jaWniXKyfoytdckvbpGyhGZC+zKCIp9AzrlQTWQ+W2EF29g/8JLCxYaYsZd1HyT2OloRuskQJJllLlRysj7iQV4XJPF3k5O9LomVi0ORRQNsX7cfb90=) 2026-01-02 00:22:50.672726 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIES3B2VNJEdfElGq9+5w9gsiIFaa90ZhpSvHweRsoOdK) 2026-01-02 00:22:50.672744 | orchestrator | 2026-01-02 00:22:50.672758 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:50.672772 | orchestrator | Friday 02 January 2026 00:22:39 +0000 (0:00:00.990) 0:00:11.934 ******** 2026-01-02 00:22:50.672784 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYzbPBESN/bFbx88xoxBDaRVoM9IeesIpgX/hdQFb3ddro4y2xX0+tIZuB2ZebnhYulvL299V96LXYM9tjyOy1BU8zYilYznpQUaK0sivRMHE2aREDtJTBZxMJk4/nMg8unZDdfRDfPGudM8P9+pnaDToeSq9zC7CKPOVqHaNktCZl/WC6VuDDWXOhFG1H1frxr9LJ/yGCXkXc9yntBqAluEQuhz9Uwqs7BCYT0EbwGJBdQMQbZilDTUSFda/BNXJ3aFElTgHIvFS52X3Ix9uDRe7F5xAvoWTlChw25n7ahMG/aQXGFB20ZpvHMqgxOcSKq77BxeUPPxB9UVn102oPhua0zV2nc4K72NnVXlEgCkaYI+NiY/bF+PyqPJuMHLTGdcnM9090v8JUwOcvMlbGf7/QPhNi3FFj/2rYDjHVGA4HWaNLDQp3QP12ptF7ltBrqHOelyRCvM0hQYn81yvarcaNC4EtRnP4fInE7tr29jJIfz8B2Siwp7oel5l9N70=) 2026-01-02 00:22:50.672796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdz31z2sIzVMb6+2/5+bJ7E52zIS3BzAh2+EDRZH+hPz9Vm17sEA6QSUr4nhb71v6KTZM1+kwDxr9/WUKQ/taY=) 2026-01-02 00:22:50.672810 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHDIv6PZkD4Odhm6Uco6wqDK0QS5jjULUrFw6Yx5KKEb) 2026-01-02 00:22:50.672821 | orchestrator | 2026-01-02 00:22:50.672833 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-02 00:22:50.672846 | orchestrator | Friday 02 January 2026 00:22:41 +0000 (0:00:01.999) 0:00:13.934 ******** 2026-01-02 00:22:50.672857 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-02 00:22:50.672869 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-02 00:22:50.672880 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-02 00:22:50.672891 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-02 00:22:50.672902 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-02 00:22:50.672913 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-02 00:22:50.672994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-02 00:22:50.673008 | orchestrator | 2026-01-02 00:22:50.673019 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-02 00:22:50.673049 | orchestrator | Friday 02 January 2026 00:22:46 +0000 (0:00:05.086) 0:00:19.020 ******** 2026-01-02 00:22:50.673062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-02 00:22:50.673074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-02 00:22:50.673086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-01-02 00:22:50.673097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-02 00:22:50.673108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-02 00:22:50.673119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-02 00:22:50.673130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-02 00:22:50.673143 | orchestrator | 2026-01-02 00:22:50.673174 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:50.673188 | orchestrator | Friday 02 January 2026 00:22:46 +0000 (0:00:00.163) 0:00:19.184 ******** 2026-01-02 00:22:50.673201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEjzI3ZqXvvDUhUvhoH9g1nh7ZtDufN0YrsELyXEoaLx+o7XPXYLT2V1WR+dSbdamyG+yh6V0Gh8a6n4OW+wlyY=) 2026-01-02 00:22:50.673215 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4HCJ3I4NVoquhr/KDATBOndi3ABVozRx47rSjnTEf4lkr2ayKX2YUrtRXZnRBJ0HS7LnIVyRPiV4e2vHp/pOez8rlSTMtskLNi5l+U35eFUlPr1iS71z7roysAjVlWZsbW/lnXTFDNV31n42UdT+bDZM6LlbAw2h4yWLbO+WJQEJ6Znz/bOmwDYv9QYxIxnh9L81BJ3QI6aKcsNwBsdVK33HvE95hYp0v+ItUvMPTG4Ql6d5flFoIRSsv7aNCq1y7Z1p3PRENm6EVfBvyntQTVTtVkuIzx/UliiA1kbAAgUTtx3rGXma0UpoHJjHtSvc7so01/V8h8Yrk4A2fC6RaPKpfdnqav7QlRE1FmHo+BsK2KY4mu+idUmYggD0R86jQxz2Uu3mOfhNFm2cLaNFWdKmoF8oNBCL6AMCwCQT7cbbsT9z2X9PuPXrjuW2Ta3Iq9OyKfummXjC2OnUPF3KlIG1P7b+Mw9CQwLodOnp5PRUGKMSu2/p0GuGujABSe38=) 2026-01-02 00:22:50.673229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEAT71+/BKj+yU7831xln6CKadw0ZP5vbTBvEMP3Nwob) 2026-01-02 00:22:50.673241 | orchestrator | 2026-01-02 00:22:50.673254 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:50.673267 | orchestrator | Friday 02 January 2026 00:22:47 +0000 (0:00:00.990) 0:00:20.175 ******** 2026-01-02 00:22:50.673280 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGnNcuz2sYyBiD8fpPp0Zxk1S6mCwSscfjjXwwfUZKLrgvt/pzsuv10YT6wOUQnnoHHu+kcn5OhTXUv79SAvOi5Qy6XPPBAr3Kl7en8FQAi2jW/TR6aZkPZtpnPtEepaTfAVYJyfbYK/W6+MPQgNCPRk6VGjnbbU7DWKOs/I1MGPQ3XFudbL0szQtbMoyGUjvStrTm4yooilqXdc9pImh35nTCP6+ChZHn9z16VHQISeQDa/XAsWt6WxrauD4L/+6X1F0Tb8UYb9cTmx29YNKduFjg8r4GiaccD3jDGjcecHAIw+1/rUhaQZNsBU7vv+oYRmavSSPQEZUlfi/BOT3WUi3DF4t/zM8qXUfdvNM32TdfhSz91Ho9F6apq9kjHlTAQkX8Tj2yYr9O9w+CIDMknO6LA0AAAlMwDNO7vheu1FO/+9IiboGckt74d425mQzq7dw7tYsDd7Q1YN8fSUdidPIe83T/sju2iyaOlq5YnZOffjjPpHtRb5cEGXINoMc=) 2026-01-02 00:22:50.673302 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC9ssuWROCcFwms75w5S/t2vSzOw4HQLM5I+LockqnkhHXY0iAZ1c3UZL1FumjUjTzMUwMBQFGjlBUeDD4q76AA=) 2026-01-02 00:22:50.673315 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIdnB4pMvHe0ymT5svjy7ZXxAbpEZc4XADr8JfFoenao) 2026-01-02 00:22:50.673328 | orchestrator | 2026-01-02 00:22:50.673341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:50.673355 | orchestrator | Friday 02 January 2026 00:22:48 +0000 (0:00:00.986) 0:00:21.161 ******** 2026-01-02 00:22:50.673368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLQErnHnCkdjH8CD49BVkGZrbU2pPZmBXsFqXpFandve29q0pzJNF++Re7zS+9UwpQ6gtnciDzJdlw7eVslyr0g=) 2026-01-02 00:22:50.673381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDp2hLKHZFeRxe6Jp5itr09No+UKfQVFGkcxwUQfbwE6) 2026-01-02 00:22:50.673394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnG3FdhmW0pKjO5s/5DsMMIqDbDfzhIjg71xDXsxqMFFUqQnR3Dtl7N4viSyx+/Sltb8p3ergl6iMZkwHqhD4NP0MhGZTk1+8tr8R5K8n9eyno2EobamAAS5FORadsnWk1ZDf0e2H5Zl5x0mw+U9bWo59gC5pcyks94fcbBTQ01KKRaMSO36r2l5SNVb5fmahbQlCk9NnelsDee1RHfCxj6c8i1npR72Zg2U62jn1siLgRVofK7qs6z7G5R+14Ht02+k6BRGj3jduvwnTQnxzWkvV2+FletAN1g3Brjs3XmSiSdXvGv4NjuUTlrZE3x77sZkXwpfkWozCaQd3z9ZtUQqi0UEQAJIDNPx/eVJQjjM7WOPDMui55E+CYY4qC/yZhJSTebQF768s+ODMsouoD6Imslb138USRmfMuBmxSKv7Z98zWUYpwJ5O+F/Xn5BZOln336Kc7I8dvlaC09BLna5AQ1/V5MihxV95p/hTYm/aYBJSUI+RvxmB150NsbIU=) 2026-01-02 00:22:50.673408 | orchestrator | 2026-01-02 00:22:50.673420 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:50.673434 | orchestrator | Friday 02 January 2026 00:22:49 +0000 (0:00:01.023) 0:00:22.184 ******** 2026-01-02 00:22:50.673447 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH+m/IRyVR7wJXEOA9UDbNpN298Fefj+ts8YgpnCENVpVIs77+Xsu8VXkDjHeRl3dDdXh7QqePUgktNJCvgdlqI=) 2026-01-02 00:22:50.673483 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCk8FGHp3IpMJnBZ8aQm3Md4mnzCrFKAb8bqSGXrwKJOuQkp+nfQDb3oVYSG23oeMO48MFbUPNzE1rOjws2JHLKn/Uy60PCVLRb18zlA0sBpNFg187+WK+Uc0/fgaj3aLuJLlZuFUeUbDr3O72q/0x8yareFuFKxUzvVkq1I8GWjiiHWFADsYRl5n2B8Ai3ho5+ss/XLpDg9quqC8AlGcS0tvpd0O/xNmhPdRC28sz09vgDKJzQ9Qf3TPyTZ4SviVFUYSxoNLHPwGPzvg6RqqJ6GzJ6x+5lB72JkrYasfXbA3AC8z30mfl8xm/40Ys1h8oIGXxZRd4QQ8/tnJFRD5FgtIREjXBNfzpPr3LoNzYPulqxWsIDw7vgrxAkE5fC3ab0JKV3qX/W5N2+jvkdJWlSp7vBYbQhr197aT9coPtfWope5kr2eII+9ahjGh1ITyATGznAc5hJ1SJKahx7VXdUlsY9wFnXqy4NdVOwcB/iawp0AlSz29/ChBF/TmfhJGs=) 2026-01-02 00:22:54.844794 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOzBuIuHn4Fks3GmpAPnlcU9m3xgg2QnoNZ9Wnyvv6AB) 2026-01-02 00:22:54.844905 | orchestrator | 2026-01-02 00:22:54.844923 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:54.844994 | orchestrator | Friday 02 January 2026 00:22:50 +0000 (0:00:01.033) 0:00:23.218 ******** 2026-01-02 00:22:54.845026 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD4aalemRaKq/rPo7fArdqyj6YQyPkDwxlrJVtIkmJtRdKtOaMkBDADpTtbDktHiZw0td5xqR9bcpYuueUdIB/2icT21iFFKlX5fSIZ7eKtdgWRgvf5VsOmtUjDRN4qVYyQOVmdQDl5IYD+lo2admF4Ij8pC3TM6MH9FeOLompfOV8veHBLqd2yad5pK7EsxeZp3xNeBIzX94MDtMHeMey/vG3pvPD34bccX4OppCaCGgVPGZJw0OESoqr6aqoBSSK6PeaUazoyHlvokeGneoM9lGL+rMF2/IZ4mCdrdp5Mcoo1tTye3ylDcjkNA7BpTxfVTD/urHL1MLKen6tBmyY32FEwQ+1kzArE7DehLAHzE6QYavdkD1nNDFWYAOWva9DZ3/w5jk7LnasSc5nK5EEne9LRcBZxhCdhltYQ69/527UtXVdfWuGR257Xi2jNaNAsW2oGr/Xkn8iqqsw3Jyz/TIhz8ekblaGEZK0IFxXPnxbrg1XXsC5mSuRSWJSHNVk=) 2026-01-02 00:22:54.845042 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbknlGOA/Uy0tbGmY3L60RM1mm/MEkds9c03Kd6i5MsIDixCPWm3E4HkeWXy9H1XhhIR8sPq6xMSlOk7fXse2I=) 2026-01-02 00:22:54.845081 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICTA1TQD2PBOdBklfq/icrNrVTbiGc9Ted1SvnhRja4J) 2026-01-02 00:22:54.845093 | orchestrator | 2026-01-02 00:22:54.845105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:54.845116 | orchestrator | Friday 02 January 2026 00:22:51 +0000 (0:00:01.046) 0:00:24.264 ******** 2026-01-02 00:22:54.845127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDj8xEBDlzFQNk5GsrSxBMXFnrj+E0o6PUegiyuqArP9hDR/c4e1jUroua9dbQNdq4iTCgiLcKdb65/C/uDCaHmSXMhQjFENdfaDPMkp4FjlIulb54lV+MX44F30LyFiUpnjwnzMXtzP7eBB6kMplX8cChjJtQtBf13mVb8t8WfULSgVw0GIkgpEANnrZtNSM3hunumo0ybV7O49iC2HhN7lgOh42dZqbf9CLB69EScGQoKX0/uNg2Uj3p9JAC1mUTg5zkX/bTOpNsqQygH8j6n6g4IgUjzYHAdRb8AJmWLNALi+6Cew6gf16tTSEsTXIw4LvbrP7BFhSCAx+DzbhKUQAB8c8oCIFB4tENAqzLOmXasdtHc0LLuPWiW2JsgLiuEhng2oSXRh6B+jaWniXKyfoytdckvbpGyhGZC+zKCIp9AzrlQTWQ+W2EF29g/8JLCxYaYsZd1HyT2OloRuskQJJllLlRysj7iQV4XJPF3k5O9LomVi0ORRQNsX7cfb90=) 2026-01-02 00:22:54.845139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBnD/sjOARGwH9TuK217wSRAyvemaksOKwIV9XA5xUFfVquQ+jgRUOPG1kX/3PXBQmKlM5PY/jFsWbgEWsCjAo=) 2026-01-02 00:22:54.845151 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIES3B2VNJEdfElGq9+5w9gsiIFaa90ZhpSvHweRsoOdK) 2026-01-02 00:22:54.845162 | orchestrator | 2026-01-02 00:22:54.845172 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-02 00:22:54.845183 | orchestrator | Friday 02 January 2026 00:22:52 +0000 (0:00:00.999) 0:00:25.264 ******** 
2026-01-02 00:22:54.845194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYzbPBESN/bFbx88xoxBDaRVoM9IeesIpgX/hdQFb3ddro4y2xX0+tIZuB2ZebnhYulvL299V96LXYM9tjyOy1BU8zYilYznpQUaK0sivRMHE2aREDtJTBZxMJk4/nMg8unZDdfRDfPGudM8P9+pnaDToeSq9zC7CKPOVqHaNktCZl/WC6VuDDWXOhFG1H1frxr9LJ/yGCXkXc9yntBqAluEQuhz9Uwqs7BCYT0EbwGJBdQMQbZilDTUSFda/BNXJ3aFElTgHIvFS52X3Ix9uDRe7F5xAvoWTlChw25n7ahMG/aQXGFB20ZpvHMqgxOcSKq77BxeUPPxB9UVn102oPhua0zV2nc4K72NnVXlEgCkaYI+NiY/bF+PyqPJuMHLTGdcnM9090v8JUwOcvMlbGf7/QPhNi3FFj/2rYDjHVGA4HWaNLDQp3QP12ptF7ltBrqHOelyRCvM0hQYn81yvarcaNC4EtRnP4fInE7tr29jJIfz8B2Siwp7oel5l9N70=) 2026-01-02 00:22:54.845206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdz31z2sIzVMb6+2/5+bJ7E52zIS3BzAh2+EDRZH+hPz9Vm17sEA6QSUr4nhb71v6KTZM1+kwDxr9/WUKQ/taY=) 2026-01-02 00:22:54.845217 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHDIv6PZkD4Odhm6Uco6wqDK0QS5jjULUrFw6Yx5KKEb) 2026-01-02 00:22:54.845228 | orchestrator | 2026-01-02 00:22:54.845239 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-02 00:22:54.845249 | orchestrator | Friday 02 January 2026 00:22:53 +0000 (0:00:01.001) 0:00:26.265 ******** 2026-01-02 00:22:54.845261 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-02 00:22:54.845272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-02 00:22:54.845283 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-02 00:22:54.845294 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-02 00:22:54.845305 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-02 00:22:54.845335 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-02 00:22:54.845349 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-02 00:22:54.845362 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:54.845377 | orchestrator | 2026-01-02 00:22:54.845391 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-02 00:22:54.845412 | orchestrator | Friday 02 January 2026 00:22:53 +0000 (0:00:00.169) 0:00:26.435 ******** 2026-01-02 00:22:54.845425 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:54.845437 | orchestrator | 2026-01-02 00:22:54.845450 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-02 00:22:54.845462 | orchestrator | Friday 02 January 2026 00:22:53 +0000 (0:00:00.051) 0:00:26.487 ******** 2026-01-02 00:22:54.845476 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:22:54.845488 | orchestrator | 2026-01-02 00:22:54.845501 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-02 00:22:54.845513 | orchestrator | Friday 02 January 2026 00:22:53 +0000 (0:00:00.053) 0:00:26.540 ******** 2026-01-02 00:22:54.845526 | orchestrator | changed: [testbed-manager] 2026-01-02 00:22:54.845540 | orchestrator | 2026-01-02 00:22:54.845554 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:22:54.845567 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-02 00:22:54.845581 | orchestrator | 2026-01-02 00:22:54.845594 | orchestrator | 2026-01-02 00:22:54.845607 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:22:54.845620 | orchestrator | Friday 02 January 2026 00:22:54 +0000 (0:00:00.671) 0:00:27.212 ******** 2026-01-02 00:22:54.845632 | orchestrator | =============================================================================== 2026-01-02 00:22:54.845645 | orchestrator 
| osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.49s 2026-01-02 00:22:54.845659 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.09s 2026-01-02 00:22:54.845672 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.00s 2026-01-02 00:22:54.845685 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-02 00:22:54.845696 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-02 00:22:54.845706 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-02 00:22:54.845717 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-02 00:22:54.845728 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-02 00:22:54.845738 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-02 00:22:54.845749 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-02 00:22:54.845760 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-02 00:22:54.845770 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-02 00:22:54.845781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-02 00:22:54.845792 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-02 00:22:54.845809 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-02 00:22:54.845820 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-02 00:22:54.845831 | orchestrator | 
osism.commons.known_hosts : Set file permissions ------------------------ 0.67s 2026-01-02 00:22:54.845842 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-01-02 00:22:54.845852 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-01-02 00:22:54.845863 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-01-02 00:22:55.115386 | orchestrator | + osism apply squid 2026-01-02 00:23:07.242570 | orchestrator | 2026-01-02 00:23:07 | INFO  | Task b3141476-c76c-4aa4-9021-71400a1000e3 (squid) was prepared for execution. 2026-01-02 00:23:07.242688 | orchestrator | 2026-01-02 00:23:07 | INFO  | It takes a moment until task b3141476-c76c-4aa4-9021-71400a1000e3 (squid) has been started and output is visible here. 2026-01-02 00:24:58.063402 | orchestrator | 2026-01-02 00:24:58.063524 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-02 00:24:58.063544 | orchestrator | 2026-01-02 00:24:58.063556 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-02 00:24:58.063569 | orchestrator | Friday 02 January 2026 00:23:10 +0000 (0:00:00.117) 0:00:00.117 ******** 2026-01-02 00:24:58.063581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:24:58.063594 | orchestrator | 2026-01-02 00:24:58.063605 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-02 00:24:58.063616 | orchestrator | Friday 02 January 2026 00:23:11 +0000 (0:00:00.073) 0:00:00.190 ******** 2026-01-02 00:24:58.063628 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:58.063641 | orchestrator | 2026-01-02 00:24:58.063653 | orchestrator | TASK 
[osism.services.squid : Create required directories] ********************** 2026-01-02 00:24:58.063664 | orchestrator | Friday 02 January 2026 00:23:12 +0000 (0:00:00.992) 0:00:01.183 ******** 2026-01-02 00:24:58.063676 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-02 00:24:58.063687 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-02 00:24:58.063698 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-02 00:24:58.063710 | orchestrator | 2026-01-02 00:24:58.063721 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-02 00:24:58.063732 | orchestrator | Friday 02 January 2026 00:23:13 +0000 (0:00:01.035) 0:00:02.219 ******** 2026-01-02 00:24:58.063744 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-02 00:24:58.063755 | orchestrator | 2026-01-02 00:24:58.063766 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-02 00:24:58.063795 | orchestrator | Friday 02 January 2026 00:23:13 +0000 (0:00:00.925) 0:00:03.144 ******** 2026-01-02 00:24:58.063806 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:58.063818 | orchestrator | 2026-01-02 00:24:58.063829 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-02 00:24:58.063840 | orchestrator | Friday 02 January 2026 00:23:14 +0000 (0:00:00.326) 0:00:03.471 ******** 2026-01-02 00:24:58.063852 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:58.063863 | orchestrator | 2026-01-02 00:24:58.063874 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-02 00:24:58.063885 | orchestrator | Friday 02 January 2026 00:23:15 +0000 (0:00:00.813) 0:00:04.285 ******** 2026-01-02 00:24:58.063896 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-02 00:24:58.063908 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:58.063919 | orchestrator | 2026-01-02 00:24:58.063996 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-02 00:24:58.064010 | orchestrator | Friday 02 January 2026 00:23:45 +0000 (0:00:30.018) 0:00:34.303 ******** 2026-01-02 00:24:58.064024 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:58.064038 | orchestrator | 2026-01-02 00:24:58.064051 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-02 00:24:58.064064 | orchestrator | Friday 02 January 2026 00:23:57 +0000 (0:00:11.938) 0:00:46.241 ******** 2026-01-02 00:24:58.064077 | orchestrator | Pausing for 60 seconds 2026-01-02 00:24:58.064091 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:58.064103 | orchestrator | 2026-01-02 00:24:58.064117 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-02 00:24:58.064130 | orchestrator | Friday 02 January 2026 00:24:57 +0000 (0:01:00.080) 0:01:46.322 ******** 2026-01-02 00:24:58.064143 | orchestrator | ok: [testbed-manager] 2026-01-02 00:24:58.064156 | orchestrator | 2026-01-02 00:24:58.064170 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-02 00:24:58.064183 | orchestrator | Friday 02 January 2026 00:24:57 +0000 (0:00:00.057) 0:01:46.379 ******** 2026-01-02 00:24:58.064217 | orchestrator | changed: [testbed-manager] 2026-01-02 00:24:58.064231 | orchestrator | 2026-01-02 00:24:58.064244 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:24:58.064257 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:24:58.064270 | orchestrator | 2026-01-02 00:24:58.064284 | orchestrator | 2026-01-02 00:24:58.064297 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-02 00:24:58.064311 | orchestrator | Friday 02 January 2026 00:24:57 +0000 (0:00:00.602) 0:01:46.981 ******** 2026-01-02 00:24:58.064325 | orchestrator | =============================================================================== 2026-01-02 00:24:58.064336 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-02 00:24:58.064347 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.02s 2026-01-02 00:24:58.064358 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.94s 2026-01-02 00:24:58.064369 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.04s 2026-01-02 00:24:58.064380 | orchestrator | osism.services.squid : Install required packages ------------------------ 0.99s 2026-01-02 00:24:58.064391 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s 2026-01-02 00:24:58.064402 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.81s 2026-01-02 00:24:58.064413 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-01-02 00:24:58.064424 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-01-02 00:24:58.064435 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-01-02 00:24:58.064446 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-01-02 00:24:58.358810 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-02 00:24:58.358912 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-02 00:24:58.364451 | orchestrator | + set -e 2026-01-02 00:24:58.364487 | orchestrator | + NAMESPACE=kolla 2026-01-02 
00:24:58.364501 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-02 00:24:58.368640 | orchestrator | ++ semver latest 9.0.0 2026-01-02 00:24:58.420613 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-02 00:24:58.420681 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-02 00:24:58.421442 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-02 00:25:10.563450 | orchestrator | 2026-01-02 00:25:10 | INFO  | Task fee7a030-3c85-4397-aed1-000e73da2266 (operator) was prepared for execution. 2026-01-02 00:25:10.563549 | orchestrator | 2026-01-02 00:25:10 | INFO  | It takes a moment until task fee7a030-3c85-4397-aed1-000e73da2266 (operator) has been started and output is visible here. 2026-01-02 00:25:25.490999 | orchestrator | 2026-01-02 00:25:25.491123 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-02 00:25:25.491142 | orchestrator | 2026-01-02 00:25:25.491155 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-02 00:25:25.491167 | orchestrator | Friday 02 January 2026 00:25:14 +0000 (0:00:00.100) 0:00:00.100 ******** 2026-01-02 00:25:25.491178 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:25.491193 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:25.491205 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:25.491216 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:25.491227 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:25.491238 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:25.491253 | orchestrator | 2026-01-02 00:25:25.491265 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-02 00:25:25.491277 | orchestrator | Friday 02 January 2026 00:25:17 +0000 (0:00:03.132) 0:00:03.233 ******** 2026-01-02 00:25:25.491288 | orchestrator | ok: [testbed-node-1] 
2026-01-02 00:25:25.491322 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:25.491334 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:25.491345 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:25.491356 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:25.491367 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:25.491378 | orchestrator | 2026-01-02 00:25:25.491389 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-02 00:25:25.491400 | orchestrator | 2026-01-02 00:25:25.491411 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-02 00:25:25.491423 | orchestrator | Friday 02 January 2026 00:25:18 +0000 (0:00:00.719) 0:00:03.953 ******** 2026-01-02 00:25:25.491434 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:25.491446 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:25.491460 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:25.491473 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:25.491486 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:25.491498 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:25.491511 | orchestrator | 2026-01-02 00:25:25.491524 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-02 00:25:25.491537 | orchestrator | Friday 02 January 2026 00:25:18 +0000 (0:00:00.192) 0:00:04.145 ******** 2026-01-02 00:25:25.491550 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:25:25.491562 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:25:25.491580 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:25:25.491600 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:25:25.491619 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:25:25.491637 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:25:25.491655 | orchestrator | 2026-01-02 00:25:25.491674 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2026-01-02 00:25:25.491693 | orchestrator | Friday 02 January 2026 00:25:18 +0000 (0:00:00.140) 0:00:04.285 ******** 2026-01-02 00:25:25.491711 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:25.491754 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:25.491779 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:25.491793 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:25.491806 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:25.491819 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:25.491832 | orchestrator | 2026-01-02 00:25:25.491843 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-02 00:25:25.491855 | orchestrator | Friday 02 January 2026 00:25:19 +0000 (0:00:00.587) 0:00:04.873 ******** 2026-01-02 00:25:25.491866 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:25.491877 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:25.491888 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:25.491899 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:25.491910 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:25.491944 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:25.491956 | orchestrator | 2026-01-02 00:25:25.491967 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-02 00:25:25.491978 | orchestrator | Friday 02 January 2026 00:25:19 +0000 (0:00:00.774) 0:00:05.647 ******** 2026-01-02 00:25:25.491989 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-01-02 00:25:25.492000 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-01-02 00:25:25.492011 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-01-02 00:25:25.492021 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-01-02 00:25:25.492032 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-01-02 
00:25:25.492043 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-01-02 00:25:25.492053 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-01-02 00:25:25.492064 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-01-02 00:25:25.492074 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-01-02 00:25:25.492085 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-01-02 00:25:25.492096 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-01-02 00:25:25.492117 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-01-02 00:25:25.492128 | orchestrator | 2026-01-02 00:25:25.492139 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-02 00:25:25.492150 | orchestrator | Friday 02 January 2026 00:25:21 +0000 (0:00:01.166) 0:00:06.814 ******** 2026-01-02 00:25:25.492161 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:25.492172 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:25.492182 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:25.492193 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:25.492204 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:25.492215 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:25.492226 | orchestrator | 2026-01-02 00:25:25.492237 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-02 00:25:25.492249 | orchestrator | Friday 02 January 2026 00:25:22 +0000 (0:00:01.161) 0:00:07.976 ******** 2026-01-02 00:25:25.492259 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-01-02 00:25:25.492271 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-01-02 00:25:25.492282 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-01-02 00:25:25.492293 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:25.492326 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:25.492338 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:25.492349 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:25.492359 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:25.492370 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-01-02 00:25:25.492381 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:25.492392 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:25.492403 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:25.492414 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:25.492425 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:25.492441 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-01-02 00:25:25.492452 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:25.492463 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:25.492473 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:25.492484 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:25.492495 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:25.492506 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-01-02 00:25:25.492516 | 
orchestrator | 2026-01-02 00:25:25.492527 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-02 00:25:25.492539 | orchestrator | Friday 02 January 2026 00:25:23 +0000 (0:00:01.201) 0:00:09.177 ******** 2026-01-02 00:25:25.492550 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:25.492561 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:25.492571 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:25.492582 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:25.492593 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:25.492604 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:25.492615 | orchestrator | 2026-01-02 00:25:25.492626 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-02 00:25:25.492637 | orchestrator | Friday 02 January 2026 00:25:23 +0000 (0:00:00.133) 0:00:09.311 ******** 2026-01-02 00:25:25.492648 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:25.492666 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:25.492677 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:25.492688 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:25.492698 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:25.492709 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:25.492720 | orchestrator | 2026-01-02 00:25:25.492731 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-02 00:25:25.492742 | orchestrator | Friday 02 January 2026 00:25:23 +0000 (0:00:00.159) 0:00:09.471 ******** 2026-01-02 00:25:25.492753 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:25.492764 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:25.492775 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:25.492786 | orchestrator | changed: [testbed-node-3] 2026-01-02 
00:25:25.492797 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:25.492808 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:25.492818 | orchestrator | 2026-01-02 00:25:25.492829 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-02 00:25:25.492840 | orchestrator | Friday 02 January 2026 00:25:24 +0000 (0:00:00.554) 0:00:10.025 ******** 2026-01-02 00:25:25.492851 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:25.492862 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:25.492873 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:25.492884 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:25.492895 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:25.492906 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:25.492917 | orchestrator | 2026-01-02 00:25:25.492944 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-02 00:25:25.492955 | orchestrator | Friday 02 January 2026 00:25:24 +0000 (0:00:00.164) 0:00:10.189 ******** 2026-01-02 00:25:25.492966 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:25:25.492977 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-02 00:25:25.492988 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:25.492999 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:25.493010 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-02 00:25:25.493021 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:25.493031 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-02 00:25:25.493042 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:25.493053 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-02 00:25:25.493064 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:25.493075 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-02 
00:25:25.493086 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:25.493097 | orchestrator | 2026-01-02 00:25:25.493108 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-02 00:25:25.493119 | orchestrator | Friday 02 January 2026 00:25:25 +0000 (0:00:00.695) 0:00:10.885 ******** 2026-01-02 00:25:25.493130 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:25.493141 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:25.493152 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:25.493163 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:25.493173 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:25.493184 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:25.493195 | orchestrator | 2026-01-02 00:25:25.493206 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-02 00:25:25.493216 | orchestrator | Friday 02 January 2026 00:25:25 +0000 (0:00:00.150) 0:00:11.035 ******** 2026-01-02 00:25:25.493227 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:25.493238 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:25.493249 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:25.493260 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:25.493279 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:26.725834 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:26.726852 | orchestrator | 2026-01-02 00:25:26.726890 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-02 00:25:26.726905 | orchestrator | Friday 02 January 2026 00:25:25 +0000 (0:00:00.147) 0:00:11.183 ******** 2026-01-02 00:25:26.726917 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:26.726965 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:26.726977 | orchestrator | skipping: [testbed-node-2] 2026-01-02 
00:25:26.726989 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:26.727000 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:26.727011 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:26.727022 | orchestrator | 2026-01-02 00:25:26.727033 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-02 00:25:26.727044 | orchestrator | Friday 02 January 2026 00:25:25 +0000 (0:00:00.127) 0:00:11.311 ******** 2026-01-02 00:25:26.727055 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:25:26.727066 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:25:26.727077 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:25:26.727088 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:25:26.727099 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:25:26.727110 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:25:26.727121 | orchestrator | 2026-01-02 00:25:26.727132 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-02 00:25:26.727143 | orchestrator | Friday 02 January 2026 00:25:26 +0000 (0:00:00.660) 0:00:11.971 ******** 2026-01-02 00:25:26.727154 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:25:26.727165 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:25:26.727176 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:25:26.727187 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:25:26.727198 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:25:26.727209 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:25:26.727220 | orchestrator | 2026-01-02 00:25:26.727231 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:25:26.727243 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:26.727256 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:26.727267 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:26.727297 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:26.727309 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:26.727320 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-02 00:25:26.727331 | orchestrator | 2026-01-02 00:25:26.727342 | orchestrator | 2026-01-02 00:25:26.727353 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:25:26.727364 | orchestrator | Friday 02 January 2026 00:25:26 +0000 (0:00:00.223) 0:00:12.194 ******** 2026-01-02 00:25:26.727375 | orchestrator | =============================================================================== 2026-01-02 00:25:26.727386 | orchestrator | Gathering Facts --------------------------------------------------------- 3.13s 2026-01-02 00:25:26.727398 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s 2026-01-02 00:25:26.727410 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2026-01-02 00:25:26.727421 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.16s 2026-01-02 00:25:26.727432 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2026-01-02 00:25:26.727452 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s 2026-01-02 00:25:26.727463 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s 2026-01-02 00:25:26.727474 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.66s 2026-01-02 00:25:26.727485 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2026-01-02 00:25:26.727496 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s 2026-01-02 00:25:26.727507 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-01-02 00:25:26.727518 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2026-01-02 00:25:26.727529 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-01-02 00:25:26.727540 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-01-02 00:25:26.727551 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-01-02 00:25:26.727562 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2026-01-02 00:25:26.727573 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2026-01-02 00:25:26.727584 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s 2026-01-02 00:25:26.727595 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-01-02 00:25:26.975426 | orchestrator | + osism apply --environment custom facts 2026-01-02 00:25:28.919861 | orchestrator | 2026-01-02 00:25:28 | INFO  | Trying to run play facts in environment custom 2026-01-02 00:25:39.073489 | orchestrator | 2026-01-02 00:25:39 | INFO  | Task 5077b730-a5ac-4221-a9c5-33da336a0eb6 (facts) was prepared for execution. 2026-01-02 00:25:39.073606 | orchestrator | 2026-01-02 00:25:39 | INFO  | It takes a moment until task 5077b730-a5ac-4221-a9c5-33da336a0eb6 (facts) has been started and output is visible here. 
2026-01-02 00:26:21.343785 | orchestrator | 2026-01-02 00:26:21.343979 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-01-02 00:26:21.344003 | orchestrator | 2026-01-02 00:26:21.344015 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-02 00:26:21.344043 | orchestrator | Friday 02 January 2026 00:25:42 +0000 (0:00:00.065) 0:00:00.065 ******** 2026-01-02 00:26:21.344055 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:21.344068 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:26:21.344080 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:21.344091 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:26:21.344102 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:21.344113 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:26:21.344124 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:21.344135 | orchestrator | 2026-01-02 00:26:21.344146 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-01-02 00:26:21.344156 | orchestrator | Friday 02 January 2026 00:25:44 +0000 (0:00:01.444) 0:00:01.509 ******** 2026-01-02 00:26:21.344167 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:21.344178 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:21.344189 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:26:21.344200 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:21.344211 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:26:21.344223 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:21.344234 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:26:21.344245 | orchestrator | 2026-01-02 00:26:21.344256 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-01-02 00:26:21.344267 | orchestrator | 2026-01-02 00:26:21.344278 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-01-02 00:26:21.344289 | orchestrator | Friday 02 January 2026 00:25:45 +0000 (0:00:01.150) 0:00:02.659 ******** 2026-01-02 00:26:21.344323 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.344337 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.344349 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.344361 | orchestrator | 2026-01-02 00:26:21.344374 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-02 00:26:21.344388 | orchestrator | Friday 02 January 2026 00:25:45 +0000 (0:00:00.091) 0:00:02.751 ******** 2026-01-02 00:26:21.344401 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.344414 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.344427 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.344438 | orchestrator | 2026-01-02 00:26:21.344449 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-02 00:26:21.344460 | orchestrator | Friday 02 January 2026 00:25:45 +0000 (0:00:00.190) 0:00:02.942 ******** 2026-01-02 00:26:21.344471 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.344482 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.344492 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.344503 | orchestrator | 2026-01-02 00:26:21.344514 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-02 00:26:21.344525 | orchestrator | Friday 02 January 2026 00:25:45 +0000 (0:00:00.181) 0:00:03.123 ******** 2026-01-02 00:26:21.344537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:26:21.344549 | orchestrator | 2026-01-02 00:26:21.344560 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-01-02 00:26:21.344571 | orchestrator | Friday 02 January 2026 00:25:45 +0000 (0:00:00.106) 0:00:03.230 ******** 2026-01-02 00:26:21.344581 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.344597 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.344615 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.344634 | orchestrator | 2026-01-02 00:26:21.344654 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-02 00:26:21.344673 | orchestrator | Friday 02 January 2026 00:25:46 +0000 (0:00:00.404) 0:00:03.634 ******** 2026-01-02 00:26:21.344691 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:26:21.344702 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:26:21.344713 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:26:21.344724 | orchestrator | 2026-01-02 00:26:21.344735 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-02 00:26:21.344746 | orchestrator | Friday 02 January 2026 00:25:46 +0000 (0:00:00.095) 0:00:03.730 ******** 2026-01-02 00:26:21.344756 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:21.344767 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:21.344778 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:21.344788 | orchestrator | 2026-01-02 00:26:21.344799 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-02 00:26:21.344810 | orchestrator | Friday 02 January 2026 00:25:47 +0000 (0:00:00.996) 0:00:04.727 ******** 2026-01-02 00:26:21.344821 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.344831 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.344842 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.344853 | orchestrator | 2026-01-02 00:26:21.344864 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-02 
00:26:21.344875 | orchestrator | Friday 02 January 2026 00:25:47 +0000 (0:00:00.429) 0:00:05.156 ******** 2026-01-02 00:26:21.344885 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:21.344897 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:21.344940 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:21.344963 | orchestrator | 2026-01-02 00:26:21.344983 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-02 00:26:21.345002 | orchestrator | Friday 02 January 2026 00:25:48 +0000 (0:00:01.019) 0:00:06.175 ******** 2026-01-02 00:26:21.345021 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:21.345054 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:21.345072 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:21.345089 | orchestrator | 2026-01-02 00:26:21.345101 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-01-02 00:26:21.345111 | orchestrator | Friday 02 January 2026 00:26:04 +0000 (0:00:15.809) 0:00:21.984 ******** 2026-01-02 00:26:21.345122 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:26:21.345133 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:26:21.345144 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:26:21.345155 | orchestrator | 2026-01-02 00:26:21.345166 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-01-02 00:26:21.345202 | orchestrator | Friday 02 January 2026 00:26:04 +0000 (0:00:00.081) 0:00:22.066 ******** 2026-01-02 00:26:21.345222 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:26:21.345240 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:26:21.345259 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:26:21.345277 | orchestrator | 2026-01-02 00:26:21.345294 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-02 
00:26:21.345305 | orchestrator | Friday 02 January 2026 00:26:12 +0000 (0:00:07.929) 0:00:29.995 ******** 2026-01-02 00:26:21.345316 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.345327 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.345337 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.345348 | orchestrator | 2026-01-02 00:26:21.345359 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-02 00:26:21.345370 | orchestrator | Friday 02 January 2026 00:26:13 +0000 (0:00:00.424) 0:00:30.420 ******** 2026-01-02 00:26:21.345380 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-01-02 00:26:21.345392 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-01-02 00:26:21.345402 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-01-02 00:26:21.345413 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-01-02 00:26:21.345424 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-01-02 00:26:21.345435 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-01-02 00:26:21.345445 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-01-02 00:26:21.345456 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-01-02 00:26:21.345467 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-01-02 00:26:21.345477 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-01-02 00:26:21.345489 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-01-02 00:26:21.345499 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-01-02 00:26:21.345510 | orchestrator | 2026-01-02 00:26:21.345521 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-01-02 00:26:21.345532 | orchestrator | Friday 02 January 2026 00:26:16 +0000 (0:00:03.259) 0:00:33.680 ******** 2026-01-02 00:26:21.345543 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.345553 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.345564 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.345575 | orchestrator | 2026-01-02 00:26:21.345586 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:26:21.345596 | orchestrator | 2026-01-02 00:26:21.345607 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-02 00:26:21.345618 | orchestrator | Friday 02 January 2026 00:26:17 +0000 (0:00:01.317) 0:00:34.997 ******** 2026-01-02 00:26:21.345629 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:26:21.345640 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:26:21.345650 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:26:21.345661 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:21.345672 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:21.345682 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:21.345701 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:21.345712 | orchestrator | 2026-01-02 00:26:21.345723 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:26:21.345735 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:21.345746 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:21.345800 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:21.345813 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:26:21.345824 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:26:21.345835 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:26:21.345846 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:26:21.345856 | orchestrator | 2026-01-02 00:26:21.345867 | orchestrator | 2026-01-02 00:26:21.345878 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:26:21.345889 | orchestrator | Friday 02 January 2026 00:26:21 +0000 (0:00:03.602) 0:00:38.600 ******** 2026-01-02 00:26:21.345900 | orchestrator | =============================================================================== 2026-01-02 00:26:21.345911 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.81s 2026-01-02 00:26:21.345947 | orchestrator | Install required packages (Debian) -------------------------------------- 7.93s 2026-01-02 00:26:21.345959 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.60s 2026-01-02 00:26:21.345969 | orchestrator | Copy fact files --------------------------------------------------------- 3.26s 2026-01-02 00:26:21.345980 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s 2026-01-02 00:26:21.345991 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s 2026-01-02 00:26:21.346011 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s 2026-01-02 00:26:21.550697 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2026-01-02 00:26:21.550792 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s 2026-01-02 00:26:21.550823 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.43s 2026-01-02 00:26:21.550836 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2026-01-02 00:26:21.550846 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s 2026-01-02 00:26:21.550857 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s 2026-01-02 00:26:21.550869 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s 2026-01-02 00:26:21.550880 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2026-01-02 00:26:21.550892 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2026-01-02 00:26:21.550903 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-01-02 00:26:21.550914 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s 2026-01-02 00:26:21.860978 | orchestrator | + osism apply bootstrap 2026-01-02 00:26:34.046877 | orchestrator | 2026-01-02 00:26:34 | INFO  | Task 4841a8a8-50c8-4014-8bfb-451d0773ccab (bootstrap) was prepared for execution. 2026-01-02 00:26:34.047062 | orchestrator | 2026-01-02 00:26:34 | INFO  | It takes a moment until task 4841a8a8-50c8-4014-8bfb-451d0773ccab (bootstrap) has been started and output is visible here. 
2026-01-02 00:26:49.406444 | orchestrator | 2026-01-02 00:26:49.407392 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-02 00:26:49.407430 | orchestrator | 2026-01-02 00:26:49.407443 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-02 00:26:49.407455 | orchestrator | Friday 02 January 2026 00:26:38 +0000 (0:00:00.139) 0:00:00.139 ******** 2026-01-02 00:26:49.407466 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:49.407480 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:49.407492 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:49.407504 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:49.407515 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:26:49.407526 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:26:49.407538 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:26:49.407549 | orchestrator | 2026-01-02 00:26:49.407560 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:26:49.407571 | orchestrator | 2026-01-02 00:26:49.407582 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-02 00:26:49.407594 | orchestrator | Friday 02 January 2026 00:26:38 +0000 (0:00:00.182) 0:00:00.321 ******** 2026-01-02 00:26:49.407605 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:26:49.407616 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:26:49.407627 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:26:49.407639 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:26:49.407650 | orchestrator | ok: [testbed-manager] 2026-01-02 00:26:49.407661 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:26:49.407673 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:26:49.407684 | orchestrator | 2026-01-02 00:26:49.407695 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-02 00:26:49.407709 | orchestrator |
2026-01-02 00:26:49.407729 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-02 00:26:49.407741 | orchestrator | Friday 02 January 2026 00:26:42 +0000 (0:00:03.925) 0:00:04.246 ********
2026-01-02 00:26:49.407752 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-02 00:26:49.407764 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-02 00:26:49.407775 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-02 00:26:49.407786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-02 00:26:49.407797 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-02 00:26:49.407808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:26:49.407819 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-02 00:26:49.407829 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-02 00:26:49.407840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:26:49.407851 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-02 00:26:49.407862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:26:49.407873 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-02 00:26:49.407884 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-02 00:26:49.407895 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-02 00:26:49.407906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:26:49.407939 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-02 00:26:49.407952 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:49.407964 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-02 00:26:49.407975 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-02 00:26:49.407986 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-02 00:26:49.408001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:26:49.408041 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-02 00:26:49.408053 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-02 00:26:49.408064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-02 00:26:49.408074 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-02 00:26:49.408085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:26:49.408096 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:49.408107 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-02 00:26:49.408118 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-02 00:26:49.408143 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:49.408154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-02 00:26:49.408165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-02 00:26:49.408176 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-02 00:26:49.408187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-02 00:26:49.408197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-02 00:26:49.408208 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-02 00:26:49.408219 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-02 00:26:49.408229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:26:49.408240 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-02 00:26:49.408251 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-02 00:26:49.408262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-02 00:26:49.408272 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:49.408283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:26:49.408294 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-02 00:26:49.408305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-02 00:26:49.408316 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:26:49.408327 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:49.408359 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-02 00:26:49.408372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-02 00:26:49.408383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-02 00:26:49.408393 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-02 00:26:49.408404 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-02 00:26:49.408415 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-02 00:26:49.408426 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:49.408437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-02 00:26:49.408448 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:49.408459 | orchestrator |
2026-01-02 00:26:49.408470 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-02 00:26:49.408481 | orchestrator |
2026-01-02 00:26:49.408492 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-02 00:26:49.408503 | orchestrator | Friday 02 January 2026 00:26:42 +0000 (0:00:00.382) 0:00:04.629 ********
2026-01-02 00:26:49.408513 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:49.408524 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:49.408535 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:49.408546 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:49.408557 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:49.408568 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:49.408579 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:49.408590 | orchestrator |
2026-01-02 00:26:49.408601 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-02 00:26:49.408620 | orchestrator | Friday 02 January 2026 00:26:43 +0000 (0:00:01.156) 0:00:05.785 ********
2026-01-02 00:26:49.408631 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:49.408642 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:26:49.408653 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:26:49.408664 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:26:49.408675 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:26:49.408686 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:26:49.408697 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:26:49.408708 | orchestrator |
2026-01-02 00:26:49.408719 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-02 00:26:49.408732 | orchestrator | Friday 02 January 2026 00:26:44 +0000 (0:00:01.072) 0:00:06.858 ********
2026-01-02 00:26:49.408751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:49.408765 | orchestrator |
2026-01-02 00:26:49.408776 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-02 00:26:49.408788 | orchestrator | Friday 02 January 2026 00:26:45 +0000 (0:00:00.239) 0:00:07.097 ********
2026-01-02 00:26:49.408799 | orchestrator | changed: [testbed-manager]
2026-01-02 00:26:49.408810 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:49.408821 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:49.408832 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:49.408843 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:49.408988 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:49.409013 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:49.409027 | orchestrator |
2026-01-02 00:26:49.409038 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-02 00:26:49.409049 | orchestrator | Friday 02 January 2026 00:26:46 +0000 (0:00:01.834) 0:00:08.931 ********
2026-01-02 00:26:49.409060 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:49.409072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:26:49.409085 | orchestrator |
2026-01-02 00:26:49.409096 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-02 00:26:49.409107 | orchestrator | Friday 02 January 2026 00:26:47 +0000 (0:00:00.207) 0:00:09.139 ********
2026-01-02 00:26:49.409117 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:49.409128 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:49.409139 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:49.409149 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:49.409200 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:49.409211 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:49.409222 | orchestrator |
2026-01-02 00:26:49.409234 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-02 00:26:49.409245 | orchestrator | Friday 02 January 2026 00:26:48 +0000 (0:00:01.062) 0:00:10.201 ********
2026-01-02 00:26:49.409256 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:49.409267 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:26:49.409277 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:26:49.409288 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:26:49.409299 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:26:49.409310 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:26:49.409321 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:26:49.409332 | orchestrator |
2026-01-02 00:26:49.409343 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-02 00:26:49.409354 | orchestrator | Friday 02 January 2026 00:26:48 +0000 (0:00:00.578) 0:00:10.779 ********
2026-01-02 00:26:49.409365 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:49.409375 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:26:49.409407 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:26:49.409474 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:26:49.409487 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:26:49.409498 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:26:49.409509 | orchestrator | ok: [testbed-manager]
2026-01-02 00:26:49.409520 | orchestrator |
2026-01-02 00:26:49.409530 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-02 00:26:49.409543 | orchestrator | Friday 02 January 2026 00:26:49 +0000 (0:00:00.245) 0:00:11.185 ********
2026-01-02 00:26:49.409554 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:26:49.409565 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:26:49.409587 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:27:01.561379 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:27:01.561498 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:27:01.561517 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:27:01.561529 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:27:01.561541 | orchestrator |
2026-01-02 00:27:01.561554 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-02 00:27:01.561566 | orchestrator | Friday 02 January 2026 00:26:49 +0000 (0:00:00.245) 0:00:11.431 ********
2026-01-02 00:27:01.561579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:01.561609 | orchestrator |
2026-01-02 00:27:01.561621 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-02 00:27:01.561633 | orchestrator | Friday 02 January 2026 00:26:49 +0000 (0:00:00.296) 0:00:11.728 ********
2026-01-02 00:27:01.561644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:01.561655 | orchestrator |
2026-01-02 00:27:01.561667 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-02 00:27:01.561678 | orchestrator | Friday 02 January 2026 00:26:50 +0000 (0:00:00.294) 0:00:12.023 ********
2026-01-02 00:27:01.561689 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.561701 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.561712 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.561723 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.561734 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.561745 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.561755 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.561766 | orchestrator |
2026-01-02 00:27:01.561778 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-02 00:27:01.561789 | orchestrator | Friday 02 January 2026 00:26:51 +0000 (0:00:01.371) 0:00:13.395 ********
2026-01-02 00:27:01.561800 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:27:01.561811 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:27:01.561823 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:27:01.561834 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:27:01.561845 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:27:01.561856 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:27:01.561867 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:27:01.561880 | orchestrator |
2026-01-02 00:27:01.561893 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-02 00:27:01.561907 | orchestrator | Friday 02 January 2026 00:26:51 +0000 (0:00:00.235) 0:00:13.630 ********
2026-01-02 00:27:01.561949 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.561963 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.561976 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.561989 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.562002 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.562105 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.562123 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.562137 | orchestrator |
2026-01-02 00:27:01.562151 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-02 00:27:01.562165 | orchestrator | Friday 02 January 2026 00:26:52 +0000 (0:00:00.598) 0:00:14.229 ********
2026-01-02 00:27:01.562177 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:27:01.562190 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:27:01.562203 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:27:01.562216 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:27:01.562230 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:27:01.562241 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:27:01.562253 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:27:01.562264 | orchestrator |
2026-01-02 00:27:01.562275 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-02 00:27:01.562288 | orchestrator | Friday 02 January 2026 00:26:52 +0000 (0:00:00.256) 0:00:14.486 ********
2026-01-02 00:27:01.562298 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.562309 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:01.562320 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:01.562331 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:01.562342 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:01.562362 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:01.562373 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:01.562384 | orchestrator |
2026-01-02 00:27:01.562395 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-02 00:27:01.562406 | orchestrator | Friday 02 January 2026 00:26:53 +0000 (0:00:00.651) 0:00:15.137 ********
2026-01-02 00:27:01.562417 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.562428 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:01.562439 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:01.562450 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:01.562461 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:01.562471 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:01.562482 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:01.562493 | orchestrator |
2026-01-02 00:27:01.562504 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-02 00:27:01.562515 | orchestrator | Friday 02 January 2026 00:26:54 +0000 (0:00:01.003) 0:00:16.141 ********
2026-01-02 00:27:01.562526 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.562537 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.562548 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.562559 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.562570 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.562581 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.562592 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.562603 | orchestrator |
2026-01-02 00:27:01.562614 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-02 00:27:01.562625 | orchestrator | Friday 02 January 2026 00:26:55 +0000 (0:00:00.986) 0:00:17.127 ********
2026-01-02 00:27:01.562656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:01.562668 | orchestrator |
2026-01-02 00:27:01.562679 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-02 00:27:01.562690 | orchestrator | Friday 02 January 2026 00:26:55 +0000 (0:00:00.330) 0:00:17.458 ********
2026-01-02 00:27:01.562701 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:27:01.562712 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:01.562723 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:01.562734 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:01.562744 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:01.562755 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:01.562774 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:01.562785 | orchestrator |
2026-01-02 00:27:01.562796 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-02 00:27:01.562808 | orchestrator | Friday 02 January 2026 00:26:56 +0000 (0:00:01.169) 0:00:18.628 ********
2026-01-02 00:27:01.562818 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.562829 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.562840 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.562851 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.562862 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.562873 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.562884 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.562895 | orchestrator |
2026-01-02 00:27:01.562906 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-02 00:27:01.562953 | orchestrator | Friday 02 January 2026 00:26:56 +0000 (0:00:00.238) 0:00:18.866 ********
2026-01-02 00:27:01.562966 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.562977 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.562988 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.562999 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.563010 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.563022 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.563033 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.563044 | orchestrator |
2026-01-02 00:27:01.563055 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-02 00:27:01.563066 | orchestrator | Friday 02 January 2026 00:26:57 +0000 (0:00:00.227) 0:00:19.112 ********
2026-01-02 00:27:01.563077 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.563088 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.563099 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.563110 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.563121 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.563131 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.563142 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.563153 | orchestrator |
2026-01-02 00:27:01.563164 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-02 00:27:01.563175 | orchestrator | Friday 02 January 2026 00:26:57 +0000 (0:00:00.328) 0:00:19.339 ********
2026-01-02 00:27:01.563187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:01.563200 | orchestrator |
2026-01-02 00:27:01.563211 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-02 00:27:01.563222 | orchestrator | Friday 02 January 2026 00:26:57 +0000 (0:00:00.726) 0:00:19.668 ********
2026-01-02 00:27:01.563233 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.563244 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.563255 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.563265 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.563276 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.563287 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.563298 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.563309 | orchestrator |
2026-01-02 00:27:01.563320 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-02 00:27:01.563331 | orchestrator | Friday 02 January 2026 00:26:58 +0000 (0:00:00.277) 0:00:20.394 ********
2026-01-02 00:27:01.563342 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:27:01.563353 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:27:01.563364 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:27:01.563434 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:27:01.563446 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:27:01.563458 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:27:01.563475 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:27:01.563486 | orchestrator |
2026-01-02 00:27:01.563506 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-02 00:27:01.563517 | orchestrator | Friday 02 January 2026 00:26:58 +0000 (0:00:00.277) 0:00:20.672 ********
2026-01-02 00:27:01.563528 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.563539 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.563550 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.563561 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.563572 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:01.563583 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:01.563594 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:01.563605 | orchestrator |
2026-01-02 00:27:01.563616 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-02 00:27:01.563627 | orchestrator | Friday 02 January 2026 00:26:59 +0000 (0:00:01.059) 0:00:21.731 ********
2026-01-02 00:27:01.563638 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.563649 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:01.563660 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.563671 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.563682 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.563693 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:01.563704 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:01.563715 | orchestrator |
2026-01-02 00:27:01.563726 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-02 00:27:01.563737 | orchestrator | Friday 02 January 2026 00:27:00 +0000 (0:00:00.598) 0:00:22.330 ********
2026-01-02 00:27:01.563748 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:01.563759 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:01.563770 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:01.563781 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:01.563802 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:41.210271 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:41.210393 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:41.210413 | orchestrator |
2026-01-02 00:27:41.210427 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-02 00:27:41.210440 | orchestrator | Friday 02 January 2026 00:27:01 +0000 (0:00:01.161) 0:00:23.492 ********
2026-01-02 00:27:41.210452 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.210465 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.210476 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:41.210487 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:41.210498 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:41.210510 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:41.210521 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:41.210532 | orchestrator |
2026-01-02 00:27:41.210543 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-02 00:27:41.210554 | orchestrator | Friday 02 January 2026 00:27:18 +0000 (0:00:16.454) 0:00:39.947 ********
2026-01-02 00:27:41.210566 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:41.210577 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.210589 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.210600 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:41.210611 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:41.210622 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:41.210633 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:41.210643 | orchestrator |
2026-01-02 00:27:41.210655 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-02 00:27:41.210666 | orchestrator | Friday 02 January 2026 00:27:18 +0000 (0:00:00.237) 0:00:40.185 ********
2026-01-02 00:27:41.210677 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:41.210688 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.210699 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.210710 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:41.210721 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:41.210732 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:41.210742 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:41.210778 | orchestrator |
2026-01-02 00:27:41.210789 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-02 00:27:41.210800 | orchestrator | Friday 02 January 2026 00:27:18 +0000 (0:00:00.239) 0:00:40.424 ********
2026-01-02 00:27:41.210811 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:41.210822 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.210833 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.210844 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:41.210855 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:41.210866 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:41.210876 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:41.210887 | orchestrator |
2026-01-02 00:27:41.210898 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-02 00:27:41.210909 | orchestrator | Friday 02 January 2026 00:27:18 +0000 (0:00:00.237) 0:00:40.662 ********
2026-01-02 00:27:41.210962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:41.210976 | orchestrator |
2026-01-02 00:27:41.210987 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-02 00:27:41.210999 | orchestrator | Friday 02 January 2026 00:27:19 +0000 (0:00:00.297) 0:00:40.959 ********
2026-01-02 00:27:41.211009 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:41.211020 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.211031 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.211042 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:41.211053 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:41.211064 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:41.211074 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:41.211085 | orchestrator |
2026-01-02 00:27:41.211096 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-02 00:27:41.211107 | orchestrator | Friday 02 January 2026 00:27:20 +0000 (0:00:01.621) 0:00:42.581 ********
2026-01-02 00:27:41.211118 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:41.211129 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:41.211140 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:41.211150 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:41.211161 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:41.211172 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:41.211183 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:41.211194 | orchestrator |
2026-01-02 00:27:41.211205 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-02 00:27:41.211216 | orchestrator | Friday 02 January 2026 00:27:21 +0000 (0:00:01.124) 0:00:43.705 ********
2026-01-02 00:27:41.211227 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:41.211238 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.211267 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.211278 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:41.211289 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:27:41.211300 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:41.211311 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:41.211322 | orchestrator |
2026-01-02 00:27:41.211333 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-02 00:27:41.211344 | orchestrator | Friday 02 January 2026 00:27:22 +0000 (0:00:00.884) 0:00:44.590 ********
2026-01-02 00:27:41.211356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:41.211369 | orchestrator |
2026-01-02 00:27:41.211380 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-02 00:27:41.211392 | orchestrator | Friday 02 January 2026 00:27:22 +0000 (0:00:00.308) 0:00:44.898 ********
2026-01-02 00:27:41.211403 | orchestrator | changed: [testbed-manager]
2026-01-02 00:27:41.211422 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:27:41.211433 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:27:41.211444 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:27:41.211455 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:27:41.211466 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:27:41.211477 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:27:41.211488 | orchestrator |
2026-01-02 00:27:41.211518 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-02 00:27:41.211531 | orchestrator | Friday 02 January 2026 00:27:24 +0000 (0:00:01.162) 0:00:46.061 ********
2026-01-02 00:27:41.211542 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:27:41.211552 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:27:41.211564 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:27:41.211574 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:27:41.211585 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:27:41.211596 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:27:41.211607 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:27:41.211618 | orchestrator |
2026-01-02 00:27:41.211629 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-02 00:27:41.211640 | orchestrator | Friday 02 January 2026 00:27:24 +0000 (0:00:00.220) 0:00:46.281 ********
2026-01-02 00:27:41.211651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:27:41.211662 | orchestrator |
2026-01-02 00:27:41.211673 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-02 00:27:41.211684 | orchestrator | Friday 02 January 2026 00:27:24 +0000 (0:00:00.303) 0:00:46.585 ********
2026-01-02 00:27:41.211695 | orchestrator | ok: [testbed-manager]
2026-01-02 00:27:41.211706 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:27:41.211717 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:27:41.211728 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:27:41.211739 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:27:41.211750 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:27:41.211760 | orchestrator | ok: [testbed-node-5]
2026-01-02
00:27:41.211771 | orchestrator | 2026-01-02 00:27:41.211782 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-02 00:27:41.211793 | orchestrator | Friday 02 January 2026 00:27:26 +0000 (0:00:01.807) 0:00:48.392 ******** 2026-01-02 00:27:41.211804 | orchestrator | changed: [testbed-manager] 2026-01-02 00:27:41.211815 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:27:41.211826 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:27:41.211837 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:27:41.211847 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:27:41.211858 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:27:41.211869 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:27:41.211879 | orchestrator | 2026-01-02 00:27:41.211890 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-02 00:27:41.211901 | orchestrator | Friday 02 January 2026 00:27:27 +0000 (0:00:01.129) 0:00:49.522 ******** 2026-01-02 00:27:41.211942 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:27:41.211954 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:27:41.211965 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:27:41.211976 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:27:41.211987 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:27:41.211998 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:27:41.212009 | orchestrator | changed: [testbed-manager] 2026-01-02 00:27:41.212020 | orchestrator | 2026-01-02 00:27:41.212031 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-02 00:27:41.212042 | orchestrator | Friday 02 January 2026 00:27:38 +0000 (0:00:11.078) 0:01:00.601 ******** 2026-01-02 00:27:41.212053 | orchestrator | ok: [testbed-manager] 2026-01-02 00:27:41.212063 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:27:41.212082 | 
orchestrator | ok: [testbed-node-0] 2026-01-02 00:27:41.212093 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:27:41.212104 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:27:41.212115 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:27:41.212125 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:27:41.212136 | orchestrator | 2026-01-02 00:27:41.212147 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-02 00:27:41.212158 | orchestrator | Friday 02 January 2026 00:27:39 +0000 (0:00:00.992) 0:01:01.594 ******** 2026-01-02 00:27:41.212169 | orchestrator | ok: [testbed-manager] 2026-01-02 00:27:41.212180 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:27:41.212191 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:27:41.212202 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:27:41.212213 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:27:41.212223 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:27:41.212234 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:27:41.212245 | orchestrator | 2026-01-02 00:27:41.212256 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-02 00:27:41.212273 | orchestrator | Friday 02 January 2026 00:27:40 +0000 (0:00:00.887) 0:01:02.481 ******** 2026-01-02 00:27:41.212284 | orchestrator | ok: [testbed-manager] 2026-01-02 00:27:41.212295 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:27:41.212306 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:27:41.212317 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:27:41.212328 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:27:41.212339 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:27:41.212350 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:27:41.212360 | orchestrator | 2026-01-02 00:27:41.212371 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-02 00:27:41.212382 | 
orchestrator | Friday 02 January 2026 00:27:40 +0000 (0:00:00.207) 0:01:02.689 ******** 2026-01-02 00:27:41.212393 | orchestrator | ok: [testbed-manager] 2026-01-02 00:27:41.212404 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:27:41.212415 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:27:41.212426 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:27:41.212437 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:27:41.212448 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:27:41.212459 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:27:41.212469 | orchestrator | 2026-01-02 00:27:41.212480 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-02 00:27:41.212491 | orchestrator | Friday 02 January 2026 00:27:40 +0000 (0:00:00.204) 0:01:02.893 ******** 2026-01-02 00:27:41.212503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:27:41.212514 | orchestrator | 2026-01-02 00:27:41.212533 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-02 00:30:00.036016 | orchestrator | Friday 02 January 2026 00:27:41 +0000 (0:00:00.248) 0:01:03.142 ******** 2026-01-02 00:30:00.036136 | orchestrator | ok: [testbed-manager] 2026-01-02 00:30:00.036154 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.036167 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.036181 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.036202 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:30:00.036220 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.036238 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.036257 | orchestrator | 2026-01-02 00:30:00.036308 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-01-02 00:30:00.036329 | orchestrator | Friday 02 January 2026 00:27:42 +0000 (0:00:01.749) 0:01:04.891 ******** 2026-01-02 00:30:00.036347 | orchestrator | changed: [testbed-manager] 2026-01-02 00:30:00.036368 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:30:00.036387 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:30:00.036407 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:30:00.036449 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:30:00.036461 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:30:00.036472 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:30:00.036483 | orchestrator | 2026-01-02 00:30:00.036494 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-02 00:30:00.036507 | orchestrator | Friday 02 January 2026 00:27:43 +0000 (0:00:00.693) 0:01:05.585 ******** 2026-01-02 00:30:00.036518 | orchestrator | ok: [testbed-manager] 2026-01-02 00:30:00.036529 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.036540 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.036551 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.036562 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:30:00.036572 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.036583 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.036594 | orchestrator | 2026-01-02 00:30:00.036605 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-02 00:30:00.036617 | orchestrator | Friday 02 January 2026 00:27:43 +0000 (0:00:00.246) 0:01:05.831 ******** 2026-01-02 00:30:00.036627 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.036638 | orchestrator | ok: [testbed-manager] 2026-01-02 00:30:00.036649 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.036660 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.036671 | orchestrator | ok: [testbed-node-0] 
2026-01-02 00:30:00.036682 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.036693 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.036704 | orchestrator | 2026-01-02 00:30:00.036715 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-02 00:30:00.036726 | orchestrator | Friday 02 January 2026 00:27:45 +0000 (0:00:01.207) 0:01:07.039 ******** 2026-01-02 00:30:00.036737 | orchestrator | changed: [testbed-manager] 2026-01-02 00:30:00.036748 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:30:00.036759 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:30:00.036770 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:30:00.036781 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:30:00.036792 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:30:00.036831 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:30:00.036843 | orchestrator | 2026-01-02 00:30:00.036854 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-02 00:30:00.036865 | orchestrator | Friday 02 January 2026 00:27:46 +0000 (0:00:01.731) 0:01:08.771 ******** 2026-01-02 00:30:00.036876 | orchestrator | ok: [testbed-manager] 2026-01-02 00:30:00.036887 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.036898 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:30:00.036909 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.036920 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.036931 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.036942 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.036953 | orchestrator | 2026-01-02 00:30:00.036964 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-02 00:30:00.036975 | orchestrator | Friday 02 January 2026 00:27:49 +0000 (0:00:02.659) 0:01:11.430 ******** 2026-01-02 00:30:00.036986 | orchestrator | ok: 
[testbed-manager] 2026-01-02 00:30:00.036997 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.037008 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.037019 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.037030 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.037041 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:30:00.037052 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.037063 | orchestrator | 2026-01-02 00:30:00.037074 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-02 00:30:00.037086 | orchestrator | Friday 02 January 2026 00:28:25 +0000 (0:00:36.116) 0:01:47.547 ******** 2026-01-02 00:30:00.037097 | orchestrator | changed: [testbed-manager] 2026-01-02 00:30:00.037108 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:30:00.037136 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:30:00.037148 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:30:00.037166 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:30:00.037177 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:30:00.037188 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:30:00.037199 | orchestrator | 2026-01-02 00:30:00.037210 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-02 00:30:00.037225 | orchestrator | Friday 02 January 2026 00:29:46 +0000 (0:01:20.570) 0:03:08.117 ******** 2026-01-02 00:30:00.037244 | orchestrator | ok: [testbed-manager] 2026-01-02 00:30:00.037264 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.037283 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.037301 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:30:00.037319 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.037337 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.037353 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.037369 | orchestrator | 2026-01-02 00:30:00.037385 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-01-02 00:30:00.037402 | orchestrator | Friday 02 January 2026 00:29:48 +0000 (0:00:01.945) 0:03:10.063 ******** 2026-01-02 00:30:00.037419 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:30:00.037436 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:30:00.037451 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:30:00.037466 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:30:00.037482 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:30:00.037497 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:30:00.037513 | orchestrator | changed: [testbed-manager] 2026-01-02 00:30:00.037528 | orchestrator | 2026-01-02 00:30:00.037544 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-02 00:30:00.037561 | orchestrator | Friday 02 January 2026 00:29:57 +0000 (0:00:09.687) 0:03:19.750 ******** 2026-01-02 00:30:00.037624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-02 00:30:00.037654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-02 00:30:00.037680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-02 00:30:00.037699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-02 00:30:00.037716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-02 00:30:00.037747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-02 00:30:00.037766 | orchestrator | 2026-01-02 00:30:00.037782 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-02 00:30:00.037835 | orchestrator | Friday 02 January 2026 00:29:58 +0000 (0:00:00.394) 0:03:20.144 ******** 2026-01-02 00:30:00.037853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-01-02 00:30:00.037870 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:30:00.037888 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:30:00.037905 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:30:00.037924 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:30:00.037940 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-02 00:30:00.037957 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:30:00.037974 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:30:00.037991 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:30:00.038127 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:30:00.038148 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:30:00.038165 | orchestrator | 2026-01-02 00:30:00.038182 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-02 00:30:00.038201 | orchestrator | Friday 02 January 2026 00:29:59 +0000 (0:00:01.762) 0:03:21.907 ******** 2026-01-02 00:30:00.038220 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:30:00.038238 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:30:00.038256 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:30:00.038275 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:30:00.038307 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-01-02 00:30:00.038346 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:30:06.008214 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:30:06.008297 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:30:06.008304 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:30:06.008309 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:30:06.008318 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:30:06.008324 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:30:06.008331 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:30:06.008337 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:30:06.008344 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:30:06.008350 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:30:06.008357 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:30:06.008385 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:30:06.008394 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:30:06.008401 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:30:06.008408 | orchestrator | skipping: 
[testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:30:06.008415 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:30:06.008421 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:30:06.008428 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:30:06.008435 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:30:06.008442 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:30:06.008448 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-02 00:30:06.008455 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:30:06.008461 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-02 00:30:06.008468 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:30:06.008475 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-02 00:30:06.008479 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:30:06.008482 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-02 00:30:06.008486 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:30:06.008490 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-02 00:30:06.008505 | orchestrator | skipping: 
[testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:30:06.008509 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:30:06.008513 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-02 00:30:06.008517 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-02 00:30:06.008521 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-02 00:30:06.008525 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-02 00:30:06.008529 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-02 00:30:06.008533 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:30:06.008537 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:30:06.008540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-02 00:30:06.008545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-02 00:30:06.008551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-02 00:30:06.008557 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-02 00:30:06.008562 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-02 00:30:06.008582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-02 00:30:06.008589 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-02 00:30:06.008601 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-02 00:30:06.008608 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-02 00:30:06.008613 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-02 00:30:06.008619 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-02 00:30:06.008624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-02 00:30:06.008630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-02 00:30:06.008635 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-02 00:30:06.008641 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-02 00:30:06.008647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-02 00:30:06.008654 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-02 00:30:06.008660 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-02 00:30:06.008666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-02 00:30:06.008673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-02 00:30:06.008679 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-02 00:30:06.008685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-02 00:30:06.008691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-02 
00:30:06.008696 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-02 00:30:06.008704 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-02 00:30:06.008708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-02 00:30:06.008711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-02 00:30:06.008715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-02 00:30:06.008719 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-02 00:30:06.008723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-02 00:30:06.008727 | orchestrator | 2026-01-02 00:30:06.008732 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-02 00:30:06.008735 | orchestrator | Friday 02 January 2026 00:30:04 +0000 (0:00:04.905) 0:03:26.812 ******** 2026-01-02 00:30:06.008739 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:30:06.008744 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:30:06.008747 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:30:06.008751 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:30:06.008759 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:30:06.008762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-02 00:30:06.008766 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'vm.swappiness', 'value': 1})
2026-01-02 00:30:06.008770 | orchestrator |
2026-01-02 00:30:06.008774 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-02 00:30:06.008782 | orchestrator | Friday 02 January 2026 00:30:05 +0000 (0:00:00.631) 0:03:27.444 ********
2026-01-02 00:30:06.008787 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.008818 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:06.008826 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.008833 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:06.008839 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.008846 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:06.008855 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.008863 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:06.008869 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.008876 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:06.008889 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439726 | orchestrator |
2026-01-02 00:30:20.439861 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-02 00:30:20.439874 | orchestrator | Friday 02 January 2026 00:30:05 +0000 (0:00:00.497) 0:03:27.941 ********
2026-01-02 00:30:20.439883 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439892 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439900 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:20.439910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439917 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:20.439925 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439933 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:20.439940 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:20.439948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439963 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-02 00:30:20.439970 | orchestrator |
2026-01-02 00:30:20.439978 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-02 00:30:20.439985 | orchestrator | Friday 02 January 2026 00:30:06 +0000 (0:00:00.592) 0:03:28.534 ********
2026-01-02 00:30:20.439993 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440000 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:20.440008 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440015 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440022 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:20.440030 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:20.440037 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440044 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:20.440052 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440078 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-02 00:30:20.440093 | orchestrator |
2026-01-02 00:30:20.440100 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-02 00:30:20.440108 | orchestrator | Friday 02 January 2026 00:30:08 +0000 (0:00:01.589) 0:03:30.124 ********
2026-01-02 00:30:20.440115 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:20.440122 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:20.440130 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:20.440137 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:20.440146 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:20.440153 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:20.440160 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:20.440168 | orchestrator |
2026-01-02 00:30:20.440175 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-02 00:30:20.440182 | orchestrator | Friday 02 January 2026 00:30:08 +0000 (0:00:00.260) 0:03:30.384 ********
2026-01-02 00:30:20.440190 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:20.440198 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:20.440206 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:20.440213 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:20.440220 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:20.440228 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:20.440235 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:20.440242 | orchestrator |
2026-01-02 00:30:20.440250 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-02 00:30:20.440257 | orchestrator | Friday 02 January 2026 00:30:14 +0000 (0:00:06.020) 0:03:36.405 ********
2026-01-02 00:30:20.440265 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-02 00:30:20.440272 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:20.440281 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-02 00:30:20.440290 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-02 00:30:20.440299 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:20.440308 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-02 00:30:20.440317 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:20.440325 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-02 00:30:20.440334 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:20.440343 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-02 00:30:20.440351 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:20.440359 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:20.440368 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-02 00:30:20.440376 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:20.440385 | orchestrator |
2026-01-02 00:30:20.440394 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-02 00:30:20.440403 | orchestrator | Friday 02 January 2026 00:30:14 +0000 (0:00:00.275) 0:03:36.680 ********
2026-01-02 00:30:20.440411 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-02 00:30:20.440421 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-02 00:30:20.440430 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-02 00:30:20.440452 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-02 00:30:20.440461 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-02 00:30:20.440470 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-02 00:30:20.440479 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-02 00:30:20.440487 | orchestrator |
2026-01-02 00:30:20.440496 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-02 00:30:20.440505 | orchestrator | Friday 02 January 2026 00:30:15 +0000 (0:00:01.075) 0:03:37.756 ********
2026-01-02 00:30:20.440515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:30:20.440531 | orchestrator |
2026-01-02 00:30:20.440539 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-02 00:30:20.440548 | orchestrator | Friday 02 January 2026 00:30:16 +0000 (0:00:00.472) 0:03:38.228 ********
2026-01-02 00:30:20.440556 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:20.440565 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:20.440573 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:20.440583 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:20.440591 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:20.440600 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:20.440608 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:20.440615 | orchestrator |
2026-01-02 00:30:20.440622 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-02 00:30:20.440630 | orchestrator | Friday 02 January 2026 00:30:17 +0000 (0:00:01.367) 0:03:39.595 ********
2026-01-02 00:30:20.440637 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:20.440645 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:20.440652 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:20.440659 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:20.440667 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:20.440674 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:20.440681 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:20.440689 | orchestrator |
2026-01-02 00:30:20.440696 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-02 00:30:20.440703 | orchestrator | Friday 02 January 2026 00:30:18 +0000 (0:00:00.613) 0:03:40.209 ********
2026-01-02 00:30:20.440711 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:20.440718 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:20.440725 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:20.440733 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:20.440740 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:20.440747 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:20.440755 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:20.440762 | orchestrator |
2026-01-02 00:30:20.440770 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-02 00:30:20.440777 | orchestrator | Friday 02 January 2026 00:30:18 +0000 (0:00:00.631) 0:03:40.840 ********
2026-01-02 00:30:20.440799 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:20.440806 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:20.440814 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:20.440821 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:20.440828 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:20.440836 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:20.440843 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:20.440850 | orchestrator |
2026-01-02 00:30:20.440857 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-02 00:30:20.440864 | orchestrator | Friday 02 January 2026 00:30:19 +0000 (0:00:00.583) 0:03:41.423 ********
2026-01-02 00:30:20.440893 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312321.241981, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:20.440903 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312348.9636488, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:20.440917 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312348.4421883, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:20.440941 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312343.7286732, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.157991 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312342.5726779, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158238 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312340.583519, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158267 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767312359.4217572, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158289 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158332 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158385 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158406 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158464 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158489 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158509 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-02 00:30:25.158530 | orchestrator |
2026-01-02 00:30:25.158551 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-02 00:30:25.158573 | orchestrator | Friday 02 January 2026 00:30:20 +0000 (0:00:00.946) 0:03:42.369 ********
2026-01-02 00:30:25.158592 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:25.158615 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:25.158635 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:25.158653 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:25.158672 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:25.158691 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:25.158711 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:25.158730 | orchestrator |
2026-01-02 00:30:25.158748 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-02 00:30:25.158809 | orchestrator | Friday 02 January 2026 00:30:21 +0000 (0:00:01.119) 0:03:43.489 ********
2026-01-02 00:30:25.158831 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:25.158849 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:25.158869 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:25.158888 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:25.158907 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:25.158926 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:25.158953 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:25.158974 | orchestrator |
2026-01-02 00:30:25.158993 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-02 00:30:25.159012 | orchestrator | Friday 02 January 2026 00:30:22 +0000 (0:00:01.167) 0:03:44.657 ********
2026-01-02 00:30:25.159029 | orchestrator | changed: [testbed-manager]
2026-01-02 00:30:25.159048 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:30:25.159065 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:30:25.159085 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:30:25.159104 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:30:25.159123 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:30:25.159142 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:30:25.159160 | orchestrator |
2026-01-02 00:30:25.159179 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-02 00:30:25.159197 | orchestrator | Friday 02 January 2026 00:30:23 +0000 (0:00:01.086) 0:03:45.743 ********
2026-01-02 00:30:25.159216 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:30:25.159234 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:30:25.159254 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:30:25.159273 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:30:25.159292 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:30:25.159310 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:30:25.159329 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:30:25.159347 | orchestrator |
2026-01-02 00:30:25.159364 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-02 00:30:25.159383 | orchestrator | Friday 02 January 2026 00:30:24 +0000 (0:00:00.253) 0:03:45.996 ********
2026-01-02 00:30:25.159401 | orchestrator | ok: [testbed-manager]
2026-01-02 00:30:25.159422 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:30:25.159441 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:30:25.159460 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:30:25.159478 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:30:25.159496 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:30:25.159515 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:30:25.159533 | orchestrator |
2026-01-02 00:30:25.159552 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-02 00:30:25.159571 | orchestrator | Friday 02 January 2026 00:30:24 +0000 (0:00:00.713) 0:03:46.710 ********
2026-01-02 00:30:25.159592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:30:25.159614 | orchestrator |
2026-01-02 00:30:25.159632 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-02 00:30:25.159668 | orchestrator | Friday 02 January 2026 00:30:25 +0000 (0:00:00.382) 0:03:47.092 ********
2026-01-02 00:31:47.153511 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.153624 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:47.153641 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:47.153652 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:47.153663 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:47.153673 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:47.153683 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:47.153701 | orchestrator |
2026-01-02 00:31:47.153748 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-02 00:31:47.153767 | orchestrator | Friday 02 January 2026 00:30:33 +0000 (0:00:08.593) 0:03:55.686 ********
2026-01-02 00:31:47.153805 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.153817 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.153827 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.153837 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.153847 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.153856 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.153867 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.153876 | orchestrator |
2026-01-02 00:31:47.153886 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-02 00:31:47.153897 | orchestrator | Friday 02 January 2026 00:30:35 +0000 (0:00:01.292) 0:03:56.978 ********
2026-01-02 00:31:47.153906 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.153916 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.153925 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.153935 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.153944 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.153954 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.153964 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.153973 | orchestrator |
2026-01-02 00:31:47.153983 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-02 00:31:47.153993 | orchestrator | Friday 02 January 2026 00:30:36 +0000 (0:00:01.071) 0:03:58.050 ********
2026-01-02 00:31:47.154002 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.154012 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.154076 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.154088 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.154099 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.154112 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.154122 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.154132 | orchestrator |
2026-01-02 00:31:47.154142 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-02 00:31:47.154153 | orchestrator | Friday 02 January 2026 00:30:36 +0000 (0:00:00.261) 0:03:58.311 ********
2026-01-02 00:31:47.154163 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.154173 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.154182 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.154192 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.154202 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.154212 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.154221 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.154231 | orchestrator |
2026-01-02 00:31:47.154241 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-02 00:31:47.154251 | orchestrator | Friday 02 January 2026 00:30:36 +0000 (0:00:00.284) 0:03:58.596 ********
2026-01-02 00:31:47.154260 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.154270 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.154280 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.154289 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.154299 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.154381 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.154392 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.154402 | orchestrator |
2026-01-02 00:31:47.154426 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-02 00:31:47.154436 | orchestrator | Friday 02 January 2026 00:30:36 +0000 (0:00:00.245) 0:03:58.842 ********
2026-01-02 00:31:47.154446 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.154455 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.154465 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.154476 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.154485 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.154503 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.154519 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.154533 | orchestrator |
2026-01-02 00:31:47.154548 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-02 00:31:47.154562 | orchestrator | Friday 02 January 2026 00:30:42 +0000 (0:00:05.849) 0:04:04.691 ********
2026-01-02 00:31:47.154593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:47.154610 | orchestrator |
2026-01-02 00:31:47.154625 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-02 00:31:47.154642 | orchestrator | Friday 02 January 2026 00:30:43 +0000 (0:00:00.368) 0:04:05.060 ********
2026-01-02 00:31:47.154657 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154672 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-02 00:31:47.154689 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154704 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-02 00:31:47.154760 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:47.154776 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154790 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:47.154806 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-02 00:31:47.154824 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154840 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-02 00:31:47.154856 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:47.154867 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154877 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-02 00:31:47.154886 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:47.154896 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154906 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-02 00:31:47.154935 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:47.154945 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:47.154955 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-02 00:31:47.154964 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-02 00:31:47.154974 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:47.154984 | orchestrator |
2026-01-02 00:31:47.154994 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-02 00:31:47.155004 | orchestrator | Friday 02 January 2026 00:30:43 +0000 (0:00:00.319) 0:04:05.379 ********
2026-01-02 00:31:47.155032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:47.155043 | orchestrator |
2026-01-02 00:31:47.155053 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-02 00:31:47.155063 | orchestrator | Friday 02 January 2026 00:30:43 +0000 (0:00:00.369) 0:04:05.749 ********
2026-01-02 00:31:47.155073 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-02 00:31:47.155082 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-02 00:31:47.155092 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:47.155102 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-02 00:31:47.155111 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:47.155121 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:47.155130 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-02 00:31:47.155140 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-02 00:31:47.155149 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:47.155159 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-02 00:31:47.155169 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:47.155178 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:47.155197 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-02 00:31:47.155207 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:47.155216 | orchestrator |
2026-01-02 00:31:47.155226 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-02 00:31:47.155235 | orchestrator | Friday 02 January 2026 00:30:44 +0000 (0:00:00.262) 0:04:06.012 ********
2026-01-02 00:31:47.155245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:47.155254 | orchestrator |
2026-01-02 00:31:47.155264 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-02 00:31:47.155274 | orchestrator | Friday 02 January 2026 00:30:44 +0000 (0:00:00.362) 0:04:06.374 ********
2026-01-02 00:31:47.155283 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:47.155293 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:47.155303 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:47.155313 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:47.155323 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:47.155332 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:47.155342 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:47.155351 | orchestrator |
2026-01-02 00:31:47.155361 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-02 00:31:47.155371 | orchestrator | Friday 02 January 2026 00:31:20 +0000 (0:00:36.234) 0:04:42.609 ********
2026-01-02 00:31:47.155380 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:47.155390 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:47.155399 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:47.155409 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:47.155419 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:47.155439 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:47.155449 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:47.155458 | orchestrator |
2026-01-02 00:31:47.155468 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-02 00:31:47.155478 | orchestrator | Friday 02 January 2026 00:31:29 +0000 (0:00:09.018) 0:04:51.628 ********
2026-01-02 00:31:47.155487 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:47.155497 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:47.155507 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:47.155517 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:47.155526 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:47.155536 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:47.155545 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:47.155555 | orchestrator |
2026-01-02 00:31:47.155565 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-02 00:31:47.155574 | orchestrator | Friday 02 January 2026 00:31:38 +0000 (0:00:08.387) 0:05:00.015 ********
2026-01-02 00:31:47.155584 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:47.155594 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:47.155603 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:47.155613 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:47.155622 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:47.155632 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:47.155641 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:47.155651 | orchestrator |
2026-01-02 00:31:47.155661 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-02 00:31:47.155671 | orchestrator | Friday 02 January 2026 00:31:40 +0000 (0:00:01.954) 0:05:01.970 ********
2026-01-02 00:31:47.155680 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:47.155690 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:47.155728 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:47.155738 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:47.155748 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:47.155763 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:47.155773 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:47.155783 | orchestrator |
2026-01-02 00:31:47.155800 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-02 00:31:58.136927 | orchestrator | Friday 02 January 2026 00:31:47 +0000 (0:00:07.107) 0:05:09.078 ********
2026-01-02 00:31:58.137021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:58.137033 | orchestrator |
2026-01-02 00:31:58.137040 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-02 00:31:58.137047 | orchestrator | Friday 02 January 2026 00:31:47 +0000 (0:00:00.474) 0:05:09.552 ********
2026-01-02 00:31:58.137054 | orchestrator | changed: [testbed-manager]
2026-01-02 00:31:58.137074 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:31:58.137081 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:31:58.137087 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:31:58.137093 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:31:58.137099 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:31:58.137105 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:31:58.137111 | orchestrator |
2026-01-02 00:31:58.137125 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-02 00:31:58.137131 | orchestrator | Friday 02 January 2026 00:31:48 +0000 (0:00:00.765) 0:05:10.318 ********
2026-01-02 00:31:58.137137 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:58.137144 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:58.137150 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:58.137156 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:58.137162 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:58.137167 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:58.137174 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:58.137180 | orchestrator |
2026-01-02 00:31:58.137185 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-02 00:31:58.137191 | orchestrator | Friday 02 January 2026 00:31:50 +0000 (0:00:01.962)
0:05:12.280 ******** 2026-01-02 00:31:58.137197 | orchestrator | changed: [testbed-manager] 2026-01-02 00:31:58.137203 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:31:58.137209 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:31:58.137215 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:31:58.137221 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:31:58.137226 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:31:58.137232 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:31:58.137238 | orchestrator | 2026-01-02 00:31:58.137244 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-02 00:31:58.137250 | orchestrator | Friday 02 January 2026 00:31:51 +0000 (0:00:00.773) 0:05:13.054 ******** 2026-01-02 00:31:58.137256 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:58.137262 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:58.137268 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:58.137273 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:58.137279 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:58.137285 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:31:58.137291 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:58.137297 | orchestrator | 2026-01-02 00:31:58.137303 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-02 00:31:58.137309 | orchestrator | Friday 02 January 2026 00:31:51 +0000 (0:00:00.256) 0:05:13.311 ******** 2026-01-02 00:31:58.137315 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:58.137321 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:58.137338 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:58.137344 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:58.137350 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:58.137374 | orchestrator | skipping: 
[testbed-node-1] 2026-01-02 00:31:58.137381 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:58.137387 | orchestrator | 2026-01-02 00:31:58.137392 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-02 00:31:58.137398 | orchestrator | Friday 02 January 2026 00:31:51 +0000 (0:00:00.369) 0:05:13.681 ******** 2026-01-02 00:31:58.137404 | orchestrator | ok: [testbed-manager] 2026-01-02 00:31:58.137410 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:31:58.137416 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:31:58.137422 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:31:58.137428 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:31:58.137434 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:31:58.137439 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:31:58.137445 | orchestrator | 2026-01-02 00:31:58.137451 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-02 00:31:58.137457 | orchestrator | Friday 02 January 2026 00:31:52 +0000 (0:00:00.281) 0:05:13.962 ******** 2026-01-02 00:31:58.137463 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:58.137469 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:58.137476 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:58.137483 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:58.137490 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:58.137497 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:31:58.137504 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:58.137510 | orchestrator | 2026-01-02 00:31:58.137517 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-02 00:31:58.137525 | orchestrator | Friday 02 January 2026 00:31:52 +0000 (0:00:00.280) 0:05:14.243 ******** 2026-01-02 00:31:58.137531 | orchestrator | ok: [testbed-manager] 2026-01-02 00:31:58.137538 | 
orchestrator | ok: [testbed-node-3] 2026-01-02 00:31:58.137545 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:31:58.137551 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:31:58.137558 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:31:58.137565 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:31:58.137572 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:31:58.137578 | orchestrator | 2026-01-02 00:31:58.137585 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-02 00:31:58.137592 | orchestrator | Friday 02 January 2026 00:31:52 +0000 (0:00:00.269) 0:05:14.512 ******** 2026-01-02 00:31:58.137599 | orchestrator | ok: [testbed-manager] =>  2026-01-02 00:31:58.137606 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137613 | orchestrator | ok: [testbed-node-3] =>  2026-01-02 00:31:58.137620 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137627 | orchestrator | ok: [testbed-node-4] =>  2026-01-02 00:31:58.137634 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137640 | orchestrator | ok: [testbed-node-5] =>  2026-01-02 00:31:58.137646 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137664 | orchestrator | ok: [testbed-node-0] =>  2026-01-02 00:31:58.137670 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137676 | orchestrator | ok: [testbed-node-1] =>  2026-01-02 00:31:58.137682 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137688 | orchestrator | ok: [testbed-node-2] =>  2026-01-02 00:31:58.137694 | orchestrator |  docker_version: 5:27.5.1 2026-01-02 00:31:58.137737 | orchestrator | 2026-01-02 00:31:58.137747 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-02 00:31:58.137758 | orchestrator | Friday 02 January 2026 00:31:52 +0000 (0:00:00.262) 0:05:14.774 ******** 2026-01-02 00:31:58.137767 | orchestrator | ok: [testbed-manager] =>  2026-01-02 
00:31:58.137776 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137782 | orchestrator | ok: [testbed-node-3] =>  2026-01-02 00:31:58.137788 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137794 | orchestrator | ok: [testbed-node-4] =>  2026-01-02 00:31:58.137799 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137805 | orchestrator | ok: [testbed-node-5] =>  2026-01-02 00:31:58.137817 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137823 | orchestrator | ok: [testbed-node-0] =>  2026-01-02 00:31:58.137828 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137834 | orchestrator | ok: [testbed-node-1] =>  2026-01-02 00:31:58.137840 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137845 | orchestrator | ok: [testbed-node-2] =>  2026-01-02 00:31:58.137851 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-02 00:31:58.137857 | orchestrator | 2026-01-02 00:31:58.137863 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-02 00:31:58.137868 | orchestrator | Friday 02 January 2026 00:31:53 +0000 (0:00:00.261) 0:05:15.035 ******** 2026-01-02 00:31:58.137874 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:31:58.137880 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:31:58.137886 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:31:58.137891 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:31:58.137897 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:31:58.137903 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:31:58.137909 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:31:58.137914 | orchestrator | 2026-01-02 00:31:58.137920 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-02 00:31:58.137926 | orchestrator | Friday 02 January 2026 00:31:53 +0000 (0:00:00.229) 0:05:15.265 ******** 
2026-01-02 00:31:58.137932 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:58.137937 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:58.137943 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:58.137949 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:31:58.137954 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:31:58.137960 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:31:58.137966 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:31:58.137972 | orchestrator |
2026-01-02 00:31:58.137977 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-02 00:31:58.137983 | orchestrator | Friday 02 January 2026 00:31:53 +0000 (0:00:00.249) 0:05:15.514 ********
2026-01-02 00:31:58.137991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:31:58.137998 | orchestrator |
2026-01-02 00:31:58.138008 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-02 00:31:58.138014 | orchestrator | Friday 02 January 2026 00:31:53 +0000 (0:00:00.408) 0:05:15.923 ********
2026-01-02 00:31:58.138064 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:58.138071 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:58.138077 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:58.138083 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:58.138088 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:58.138094 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:58.138100 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:58.138106 | orchestrator |
2026-01-02 00:31:58.138112 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-02 00:31:58.138118 | orchestrator | Friday 02 January 2026 00:31:54 +0000 (0:00:00.979) 0:05:16.902 ********
2026-01-02 00:31:58.138124 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:31:58.138129 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:31:58.138135 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:31:58.138141 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:31:58.138147 | orchestrator | ok: [testbed-manager]
2026-01-02 00:31:58.138153 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:31:58.138158 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:31:58.138164 | orchestrator |
2026-01-02 00:31:58.138170 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-02 00:31:58.138177 | orchestrator | Friday 02 January 2026 00:31:57 +0000 (0:00:02.793) 0:05:19.696 ********
2026-01-02 00:31:58.138188 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-02 00:31:58.138194 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-02 00:31:58.138200 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-02 00:31:58.138206 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-02 00:31:58.138212 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-02 00:31:58.138218 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-02 00:31:58.138224 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:31:58.138229 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-02 00:31:58.138235 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-02 00:31:58.138241 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-02 00:31:58.138247 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:31:58.138253 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-02 00:31:58.138258 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-02 00:31:58.138264 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-02 00:31:58.138270 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:31:58.138276 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-02 00:31:58.138287 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-02 00:33:01.787816 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-02 00:33:01.787937 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:01.787955 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-02 00:33:01.787968 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-02 00:33:01.787979 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-02 00:33:01.787991 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:01.788002 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:01.788013 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-02 00:33:01.788024 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-02 00:33:01.788035 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-02 00:33:01.788046 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:01.788058 | orchestrator |
2026-01-02 00:33:01.788071 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-02 00:33:01.788084 | orchestrator | Friday 02 January 2026 00:31:58 +0000 (0:00:00.568) 0:05:20.264 ********
2026-01-02 00:33:01.788095 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.788107 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.788118 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.788129 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.788140 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.788151 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.788162 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.788173 | orchestrator |
2026-01-02 00:33:01.788184 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-02 00:33:01.788195 | orchestrator | Friday 02 January 2026 00:32:05 +0000 (0:00:07.179) 0:05:27.443 ********
2026-01-02 00:33:01.788206 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.788217 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.788228 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.788239 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.788250 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.788261 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.788272 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.788283 | orchestrator |
2026-01-02 00:33:01.788294 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-02 00:33:01.788305 | orchestrator | Friday 02 January 2026 00:32:06 +0000 (0:00:01.038) 0:05:28.482 ********
2026-01-02 00:33:01.788316 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.788327 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.788360 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.788374 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.788388 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.788401 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.788415 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.788427 | orchestrator |
2026-01-02 00:33:01.788440 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-02 00:33:01.788454 | orchestrator | Friday 02 January 2026 00:32:15 +0000 (0:00:08.721) 0:05:37.204 ********
2026-01-02 00:33:01.788468 | orchestrator | changed: [testbed-manager]
2026-01-02 00:33:01.788481 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.788493 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.788507 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.788520 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.788546 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.788558 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.788571 | orchestrator |
2026-01-02 00:33:01.788585 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-02 00:33:01.788599 | orchestrator | Friday 02 January 2026 00:32:18 +0000 (0:00:03.392) 0:05:40.596 ********
2026-01-02 00:33:01.788612 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.788625 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.788638 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.788720 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.788734 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.788746 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.788757 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.788768 | orchestrator |
2026-01-02 00:33:01.788779 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-02 00:33:01.788790 | orchestrator | Friday 02 January 2026 00:32:19 +0000 (0:00:01.322) 0:05:41.919 ********
2026-01-02 00:33:01.788800 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.788811 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.788822 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.788833 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.788843 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.788854 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.788865 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.788876 | orchestrator |
2026-01-02 00:33:01.788887 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-02 00:33:01.788898 | orchestrator | Friday 02 January 2026 00:32:21 +0000 (0:00:01.501) 0:05:43.421 ********
2026-01-02 00:33:01.788908 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:01.788919 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:01.788930 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:01.788941 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:01.788952 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:01.788963 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:01.788974 | orchestrator | changed: [testbed-manager]
2026-01-02 00:33:01.788985 | orchestrator |
2026-01-02 00:33:01.788996 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-02 00:33:01.789007 | orchestrator | Friday 02 January 2026 00:32:22 +0000 (0:00:00.572) 0:05:43.993 ********
2026-01-02 00:33:01.789017 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.789028 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.789039 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.789049 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.789060 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.789071 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.789082 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.789092 | orchestrator |
2026-01-02 00:33:01.789104 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-02 00:33:01.789132 | orchestrator | Friday 02 January 2026 00:32:32 +0000 (0:00:10.031) 0:05:54.025 ********
2026-01-02 00:33:01.789153 | orchestrator | changed: [testbed-manager]
2026-01-02 00:33:01.789164 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.789175 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.789186 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.789197 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.789208 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.789219 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.789229 | orchestrator |
2026-01-02 00:33:01.789241 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-02 00:33:01.789252 | orchestrator | Friday 02 January 2026 00:32:33 +0000 (0:00:00.940) 0:05:54.966 ********
2026-01-02 00:33:01.789262 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.789274 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.789284 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.789296 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.789307 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.789318 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.789329 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.789340 | orchestrator |
2026-01-02 00:33:01.789351 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-02 00:33:01.789362 | orchestrator | Friday 02 January 2026 00:32:42 +0000 (0:00:09.855) 0:06:04.822 ********
2026-01-02 00:33:01.789373 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.789384 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.789394 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.789405 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.789416 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.789427 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.789438 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.789449 | orchestrator |
2026-01-02 00:33:01.789460 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-02 00:33:01.789471 | orchestrator | Friday 02 January 2026 00:32:54 +0000 (0:00:12.007) 0:06:16.830 ********
2026-01-02 00:33:01.789481 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-02 00:33:01.789492 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-02 00:33:01.789503 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-02 00:33:01.789514 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-02 00:33:01.789525 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-02 00:33:01.789536 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-02 00:33:01.789546 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-02 00:33:01.789557 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-02 00:33:01.789568 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-02 00:33:01.789579 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-02 00:33:01.789590 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-02 00:33:01.789601 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-02 00:33:01.789611 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-02 00:33:01.789623 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-02 00:33:01.789633 | orchestrator |
2026-01-02 00:33:01.789661 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-02 00:33:01.789673 | orchestrator | Friday 02 January 2026 00:32:56 +0000 (0:00:01.170) 0:06:18.000 ********
2026-01-02 00:33:01.789684 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:33:01.789695 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:01.789706 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:01.789717 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:01.789728 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:01.789739 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:01.789750 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:01.789768 | orchestrator |
2026-01-02 00:33:01.789780 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-02 00:33:01.789790 | orchestrator | Friday 02 January 2026 00:32:56 +0000 (0:00:00.497) 0:06:18.497 ********
2026-01-02 00:33:01.789801 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:01.789812 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:01.789823 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:01.789834 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:01.789845 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:01.789856 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:01.789866 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:01.789877 | orchestrator |
2026-01-02 00:33:01.789889 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-02 00:33:01.789901 | orchestrator | Friday 02 January 2026 00:33:00 +0000 (0:00:04.355) 0:06:22.852 ********
2026-01-02 00:33:01.789911 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:33:01.789923 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:01.789934 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:01.789945 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:01.789956 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:01.789966 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:01.789977 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:01.789988 | orchestrator |
2026-01-02 00:33:01.790000 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-02 00:33:01.790011 | orchestrator | Friday 02 January 2026 00:33:01 +0000 (0:00:00.452) 0:06:23.305 ********
2026-01-02 00:33:01.790080 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-02 00:33:01.790092 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-02 00:33:01.790103 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:33:01.790114 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-02 00:33:01.790125 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-02 00:33:01.790135 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:01.790146 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-02 00:33:01.790157 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-02 00:33:01.790168 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-02 00:33:01.790187 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-02 00:33:20.380368 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:20.380456 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-02 00:33:20.380465 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-02 00:33:20.380470 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:20.380474 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-02 00:33:20.380479 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-02 00:33:20.380483 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:20.380487 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:20.380491 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-02 00:33:20.380495 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-02 00:33:20.380499 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:20.380504 | orchestrator |
2026-01-02 00:33:20.380510 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-02 00:33:20.380516 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.641) 0:06:23.946 ********
2026-01-02 00:33:20.380520 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:33:20.380524 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:20.380528 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:20.380532 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:20.380539 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:20.380566 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:20.380574 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:20.380580 | orchestrator |
2026-01-02 00:33:20.380624 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-02 00:33:20.380681 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.465) 0:06:24.412 ********
2026-01-02 00:33:20.380686 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:33:20.380690 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:20.380694 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:20.380698 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:20.380702 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:20.380706 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:20.380710 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:20.380714 | orchestrator |
2026-01-02 00:33:20.380718 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-02 00:33:20.380723 | orchestrator | Friday 02 January 2026 00:33:02 +0000 (0:00:00.502) 0:06:24.881 ********
2026-01-02 00:33:20.380727 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:33:20.380731 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:33:20.380735 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:33:20.380739 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:33:20.380742 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:33:20.380746 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:33:20.380750 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:33:20.380754 | orchestrator |
2026-01-02 00:33:20.380759 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-02 00:33:20.380762 | orchestrator | Friday 02 January 2026 00:33:03 +0000 (0:00:00.502) 0:06:25.384 ********
2026-01-02 00:33:20.380767 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:20.380771 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:33:20.380775 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:33:20.380782 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:33:20.380786 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:33:20.380790 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:33:20.380794 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:33:20.380798 | orchestrator |
2026-01-02 00:33:20.380802 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-02 00:33:20.380806 | orchestrator | Friday 02 January 2026 00:33:05 +0000 (0:00:01.803) 0:06:27.188 ********
2026-01-02 00:33:20.380811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:33:20.380817 | orchestrator |
2026-01-02 00:33:20.380821 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-02 00:33:20.380825 | orchestrator | Friday 02 January 2026 00:33:06 +0000 (0:00:00.784) 0:06:27.973 ********
2026-01-02 00:33:20.380829 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:20.380833 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:20.380837 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:20.380842 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:20.380845 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:20.380849 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:20.380853 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:20.380858 | orchestrator |
2026-01-02 00:33:20.380862 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-02 00:33:20.380866 | orchestrator | Friday 02 January 2026 00:33:06 +0000 (0:00:00.842) 0:06:28.815 ********
2026-01-02 00:33:20.380870 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:20.380874 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:20.380878 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:20.380882 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:20.380885 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:20.380894 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:20.380898 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:33:20.380902 | orchestrator |
2026-01-02 00:33:20.380906 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-02 00:33:20.380910 | orchestrator | Friday 02 January 2026 00:33:07 +0000 (0:00:00.809) 0:06:29.624 ********
2026-01-02 00:33:20.380913 | orchestrator | ok: [testbed-manager]
2026-01-02 00:33:20.380918 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:33:20.380923 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:33:20.380928 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:33:20.380933 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:33:20.380938 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:33:20.380942 | orchestrator | changed:
[testbed-node-2] 2026-01-02 00:33:20.380947 | orchestrator | 2026-01-02 00:33:20.380951 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-01-02 00:33:20.380969 | orchestrator | Friday 02 January 2026 00:33:09 +0000 (0:00:01.473) 0:06:31.097 ******** 2026-01-02 00:33:20.380974 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:20.380979 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:20.380983 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:20.380988 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:20.380992 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:20.380997 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:20.381001 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:20.381006 | orchestrator | 2026-01-02 00:33:20.381011 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-02 00:33:20.381016 | orchestrator | Friday 02 January 2026 00:33:10 +0000 (0:00:01.370) 0:06:32.468 ******** 2026-01-02 00:33:20.381020 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:20.381025 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:20.381030 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:20.381034 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:20.381038 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:20.381042 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:20.381047 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:20.381054 | orchestrator | 2026-01-02 00:33:20.381061 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-02 00:33:20.381067 | orchestrator | Friday 02 January 2026 00:33:11 +0000 (0:00:01.340) 0:06:33.808 ******** 2026-01-02 00:33:20.381074 | orchestrator | changed: [testbed-manager] 2026-01-02 00:33:20.381080 | orchestrator | changed: [testbed-node-3] 2026-01-02 
00:33:20.381087 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:20.381092 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:20.381099 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:20.381106 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:20.381112 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:20.381118 | orchestrator | 2026-01-02 00:33:20.381124 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-02 00:33:20.381130 | orchestrator | Friday 02 January 2026 00:33:13 +0000 (0:00:01.357) 0:06:35.166 ******** 2026-01-02 00:33:20.381136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:20.381143 | orchestrator | 2026-01-02 00:33:20.381149 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-02 00:33:20.381155 | orchestrator | Friday 02 January 2026 00:33:14 +0000 (0:00:00.931) 0:06:36.098 ******** 2026-01-02 00:33:20.381161 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:20.381169 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:20.381175 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:20.381181 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:20.381188 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:20.381194 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:20.381210 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:20.381217 | orchestrator | 2026-01-02 00:33:20.381224 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-02 00:33:20.381230 | orchestrator | Friday 02 January 2026 00:33:15 +0000 (0:00:01.540) 0:06:37.639 ******** 2026-01-02 00:33:20.381236 | orchestrator | ok: [testbed-manager] 2026-01-02 
00:33:20.381242 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:20.381249 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:20.381256 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:20.381263 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:20.381270 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:20.381276 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:20.381283 | orchestrator | 2026-01-02 00:33:20.381291 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-02 00:33:20.381295 | orchestrator | Friday 02 January 2026 00:33:16 +0000 (0:00:01.110) 0:06:38.749 ******** 2026-01-02 00:33:20.381299 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:20.381303 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:20.381307 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:20.381311 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:20.381315 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:20.381319 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:20.381323 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:20.381327 | orchestrator | 2026-01-02 00:33:20.381331 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-02 00:33:20.381336 | orchestrator | Friday 02 January 2026 00:33:17 +0000 (0:00:01.120) 0:06:39.870 ******** 2026-01-02 00:33:20.381340 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:20.381344 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:20.381348 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:20.381352 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:20.381355 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:20.381359 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:20.381363 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:20.381367 | orchestrator | 2026-01-02 00:33:20.381371 | orchestrator | TASK [osism.services.docker : Include bootstrap 
tasks] ************************* 2026-01-02 00:33:20.381375 | orchestrator | Friday 02 January 2026 00:33:19 +0000 (0:00:01.320) 0:06:41.191 ******** 2026-01-02 00:33:20.381379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:20.381383 | orchestrator | 2026-01-02 00:33:20.381387 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:20.381391 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.840) 0:06:42.031 ******** 2026-01-02 00:33:20.381395 | orchestrator | 2026-01-02 00:33:20.381399 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:20.381403 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.038) 0:06:42.069 ******** 2026-01-02 00:33:20.381407 | orchestrator | 2026-01-02 00:33:20.381410 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:20.381414 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.037) 0:06:42.107 ******** 2026-01-02 00:33:20.381418 | orchestrator | 2026-01-02 00:33:20.381422 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:20.381426 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.043) 0:06:42.151 ******** 2026-01-02 00:33:20.381430 | orchestrator | 2026-01-02 00:33:20.381441 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:47.213766 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.036) 0:06:42.187 ******** 2026-01-02 00:33:47.213849 | orchestrator | 2026-01-02 00:33:47.213857 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2026-01-02 00:33:47.213863 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.037) 0:06:42.225 ******** 2026-01-02 00:33:47.213886 | orchestrator | 2026-01-02 00:33:47.213891 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-02 00:33:47.213896 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.044) 0:06:42.269 ******** 2026-01-02 00:33:47.213901 | orchestrator | 2026-01-02 00:33:47.213905 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-02 00:33:47.213910 | orchestrator | Friday 02 January 2026 00:33:20 +0000 (0:00:00.036) 0:06:42.306 ******** 2026-01-02 00:33:47.213915 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:47.213922 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:47.213926 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:47.213931 | orchestrator | 2026-01-02 00:33:47.213935 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-02 00:33:47.213940 | orchestrator | Friday 02 January 2026 00:33:21 +0000 (0:00:01.369) 0:06:43.675 ******** 2026-01-02 00:33:47.213945 | orchestrator | changed: [testbed-manager] 2026-01-02 00:33:47.213951 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:47.213955 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:47.213960 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:47.213964 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:47.213969 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:47.213973 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:47.213978 | orchestrator | 2026-01-02 00:33:47.213983 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-02 00:33:47.213988 | orchestrator | Friday 02 January 2026 00:33:23 +0000 (0:00:01.539) 0:06:45.215 ******** 2026-01-02 
00:33:47.213992 | orchestrator | changed: [testbed-manager] 2026-01-02 00:33:47.213997 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:47.214001 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:47.214006 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:47.214010 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:47.214049 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:47.214055 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:47.214059 | orchestrator | 2026-01-02 00:33:47.214064 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-02 00:33:47.214069 | orchestrator | Friday 02 January 2026 00:33:24 +0000 (0:00:01.182) 0:06:46.397 ******** 2026-01-02 00:33:47.214073 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:47.214078 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:47.214082 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:47.214087 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:47.214092 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:47.214096 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:47.214101 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:47.214106 | orchestrator | 2026-01-02 00:33:47.214110 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-02 00:33:47.214115 | orchestrator | Friday 02 January 2026 00:33:26 +0000 (0:00:02.395) 0:06:48.792 ******** 2026-01-02 00:33:47.214130 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:47.214134 | orchestrator | 2026-01-02 00:33:47.214139 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-02 00:33:47.214144 | orchestrator | Friday 02 January 2026 00:33:26 +0000 (0:00:00.110) 0:06:48.903 ******** 2026-01-02 00:33:47.214148 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:47.214153 | 
orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:47.214158 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:47.214162 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:47.214167 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:33:47.214172 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:47.214177 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:47.214181 | orchestrator | 2026-01-02 00:33:47.214186 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-02 00:33:47.214192 | orchestrator | Friday 02 January 2026 00:33:27 +0000 (0:00:01.000) 0:06:49.903 ******** 2026-01-02 00:33:47.214202 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:47.214210 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:47.214217 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:47.214224 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:47.214232 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:47.214239 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:47.214246 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:47.214253 | orchestrator | 2026-01-02 00:33:47.214261 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-02 00:33:47.214268 | orchestrator | Friday 02 January 2026 00:33:28 +0000 (0:00:00.502) 0:06:50.405 ******** 2026-01-02 00:33:47.214276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:47.214286 | orchestrator | 2026-01-02 00:33:47.214293 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-02 00:33:47.214300 | orchestrator | Friday 02 January 2026 00:33:29 +0000 (0:00:01.103) 
0:06:51.509 ******** 2026-01-02 00:33:47.214308 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:47.214315 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:47.214323 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:47.214330 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:47.214339 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:47.214347 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:47.214356 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:47.214364 | orchestrator | 2026-01-02 00:33:47.214373 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-02 00:33:47.214382 | orchestrator | Friday 02 January 2026 00:33:30 +0000 (0:00:00.884) 0:06:52.393 ******** 2026-01-02 00:33:47.214391 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-02 00:33:47.214400 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-02 00:33:47.214423 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-02 00:33:47.214432 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-02 00:33:47.214441 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-02 00:33:47.214450 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-02 00:33:47.214459 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-02 00:33:47.214468 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-02 00:33:47.214476 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-02 00:33:47.214483 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-02 00:33:47.214491 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-02 00:33:47.214498 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-02 00:33:47.214505 | orchestrator | changed: [testbed-node-1] => 
(item=docker_images) 2026-01-02 00:33:47.214512 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-02 00:33:47.214519 | orchestrator | 2026-01-02 00:33:47.214527 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-01-02 00:33:47.214534 | orchestrator | Friday 02 January 2026 00:33:32 +0000 (0:00:02.523) 0:06:54.917 ******** 2026-01-02 00:33:47.214541 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:47.214549 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:47.214556 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:47.214563 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:47.214570 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:47.214577 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:47.214585 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:47.214592 | orchestrator | 2026-01-02 00:33:47.214599 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-02 00:33:47.214630 | orchestrator | Friday 02 January 2026 00:33:33 +0000 (0:00:00.633) 0:06:55.550 ******** 2026-01-02 00:33:47.214647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:33:47.214656 | orchestrator | 2026-01-02 00:33:47.214664 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-02 00:33:47.214671 | orchestrator | Friday 02 January 2026 00:33:34 +0000 (0:00:00.814) 0:06:56.365 ******** 2026-01-02 00:33:47.214678 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:47.214685 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:47.214693 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:47.214700 | 
orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:47.214707 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:47.214714 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:47.214722 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:47.214729 | orchestrator | 2026-01-02 00:33:47.214736 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-02 00:33:47.214747 | orchestrator | Friday 02 January 2026 00:33:35 +0000 (0:00:00.838) 0:06:57.203 ******** 2026-01-02 00:33:47.214754 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:47.214762 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:47.214769 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:47.214776 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:47.214783 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:47.214791 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:47.214798 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:47.214805 | orchestrator | 2026-01-02 00:33:47.214812 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-02 00:33:47.214819 | orchestrator | Friday 02 January 2026 00:33:36 +0000 (0:00:01.053) 0:06:58.256 ******** 2026-01-02 00:33:47.214826 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:47.214834 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:47.214841 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:47.214848 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:47.214855 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:47.214863 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:47.214870 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:47.214877 | orchestrator | 2026-01-02 00:33:47.214885 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-02 00:33:47.214892 | orchestrator | Friday 02 January 2026 
00:33:36 +0000 (0:00:00.509) 0:06:58.766 ******** 2026-01-02 00:33:47.214899 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:47.214906 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:33:47.214913 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:33:47.214921 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:33:47.214928 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:33:47.214935 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:33:47.214942 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:33:47.214949 | orchestrator | 2026-01-02 00:33:47.214956 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-02 00:33:47.214964 | orchestrator | Friday 02 January 2026 00:33:38 +0000 (0:00:01.510) 0:07:00.277 ******** 2026-01-02 00:33:47.214971 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:33:47.214978 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:33:47.214985 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:33:47.214993 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:33:47.215000 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:33:47.215007 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:33:47.215014 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:33:47.215021 | orchestrator | 2026-01-02 00:33:47.215029 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-02 00:33:47.215036 | orchestrator | Friday 02 January 2026 00:33:38 +0000 (0:00:00.496) 0:07:00.773 ******** 2026-01-02 00:33:47.215050 | orchestrator | ok: [testbed-manager] 2026-01-02 00:33:47.215058 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:33:47.215065 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:33:47.215072 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:33:47.215079 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:33:47.215087 | orchestrator | changed: [testbed-node-0] 
2026-01-02 00:33:47.215094 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:33:47.215101 | orchestrator | 2026-01-02 00:33:47.215113 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-01-02 00:34:20.105155 | orchestrator | Friday 02 January 2026 00:33:47 +0000 (0:00:08.366) 0:07:09.140 ******** 2026-01-02 00:34:20.105281 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:20.105299 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:34:20.105312 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:34:20.105324 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:34:20.105336 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:34:20.105347 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:34:20.105358 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:20.105369 | orchestrator | 2026-01-02 00:34:20.105382 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-02 00:34:20.105394 | orchestrator | Friday 02 January 2026 00:33:48 +0000 (0:00:01.533) 0:07:10.673 ******** 2026-01-02 00:34:20.105405 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:20.105416 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:34:20.105427 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:34:20.105438 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:34:20.105449 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:34:20.105461 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:34:20.105472 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:20.105483 | orchestrator | 2026-01-02 00:34:20.105494 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-02 00:34:20.105505 | orchestrator | Friday 02 January 2026 00:33:50 +0000 (0:00:01.730) 0:07:12.403 ******** 2026-01-02 00:34:20.105517 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:20.105528 | 
orchestrator | changed: [testbed-node-3] 2026-01-02 00:34:20.105538 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:34:20.105549 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:34:20.105610 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:34:20.105623 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:34:20.105634 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:34:20.105645 | orchestrator | 2026-01-02 00:34:20.105656 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-02 00:34:20.105668 | orchestrator | Friday 02 January 2026 00:33:52 +0000 (0:00:01.706) 0:07:14.109 ******** 2026-01-02 00:34:20.105678 | orchestrator | ok: [testbed-manager] 2026-01-02 00:34:20.105691 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:34:20.105705 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:34:20.105718 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:34:20.105731 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:34:20.105743 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:34:20.105757 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:34:20.105771 | orchestrator | 2026-01-02 00:34:20.105785 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-02 00:34:20.105798 | orchestrator | Friday 02 January 2026 00:33:53 +0000 (0:00:00.946) 0:07:15.056 ******** 2026-01-02 00:34:20.105810 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:34:20.105824 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:34:20.105837 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:34:20.105850 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:34:20.105863 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:34:20.105877 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:34:20.105890 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:34:20.105903 | orchestrator | 2026-01-02 00:34:20.105917 | 
orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-02 00:34:20.105959 | orchestrator | Friday 02 January 2026 00:33:54 +0000 (0:00:01.029) 0:07:16.086 ********
2026-01-02 00:34:20.105972 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:20.105985 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:20.105998 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:20.106011 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:20.106120 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:20.106134 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:20.106145 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:20.106157 | orchestrator |
2026-01-02 00:34:20.106168 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-02 00:34:20.106179 | orchestrator | Friday 02 January 2026 00:33:54 +0000 (0:00:00.558) 0:07:16.644 ********
2026-01-02 00:34:20.106190 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.106201 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.106212 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.106223 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.106235 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.106246 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.106257 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.106268 | orchestrator |
2026-01-02 00:34:20.106279 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-02 00:34:20.106290 | orchestrator | Friday 02 January 2026 00:33:55 +0000 (0:00:00.535) 0:07:17.180 ********
2026-01-02 00:34:20.106301 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.106312 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.106323 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.106334 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.106345 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.106357 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.106368 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.106378 | orchestrator |
2026-01-02 00:34:20.106390 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-02 00:34:20.106401 | orchestrator | Friday 02 January 2026 00:33:55 +0000 (0:00:00.517) 0:07:17.698 ********
2026-01-02 00:34:20.106411 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.106422 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.106433 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.106444 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.106456 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.106466 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.106477 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.106488 | orchestrator |
2026-01-02 00:34:20.106499 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-02 00:34:20.106510 | orchestrator | Friday 02 January 2026 00:33:56 +0000 (0:00:00.697) 0:07:18.395 ********
2026-01-02 00:34:20.106521 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.106532 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.106543 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.106554 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.106582 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.106593 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.106604 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.106615 | orchestrator |
2026-01-02 00:34:20.106627 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-02 00:34:20.106657 | orchestrator | Friday 02 January 2026 00:34:02 +0000 (0:00:05.983) 0:07:24.379 ********
2026-01-02 00:34:20.106668 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:20.106680 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:20.106691 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:20.106702 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:20.106713 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:20.106724 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:20.106735 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:20.106746 | orchestrator |
2026-01-02 00:34:20.106757 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-02 00:34:20.106778 | orchestrator | Friday 02 January 2026 00:34:02 +0000 (0:00:00.533) 0:07:24.912 ********
2026-01-02 00:34:20.106792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:20.106805 | orchestrator |
2026-01-02 00:34:20.106816 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-02 00:34:20.106827 | orchestrator | Friday 02 January 2026 00:34:03 +0000 (0:00:00.976) 0:07:25.889 ********
2026-01-02 00:34:20.106838 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.106849 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.106860 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.106871 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.106882 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.106893 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.106904 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.106915 | orchestrator |
2026-01-02 00:34:20.106926 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-02 00:34:20.106937 | orchestrator | Friday 02 January 2026 00:34:05 +0000 (0:00:01.900) 0:07:27.790 ********
2026-01-02 00:34:20.106948 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.106959 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.106970 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.106981 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.106992 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.107003 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.107014 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.107025 | orchestrator |
2026-01-02 00:34:20.107036 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-02 00:34:20.107047 | orchestrator | Friday 02 January 2026 00:34:06 +0000 (0:00:01.101) 0:07:28.891 ********
2026-01-02 00:34:20.107058 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:20.107069 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:20.107080 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:20.107091 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:20.107102 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:20.107131 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:20.107143 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:20.107154 | orchestrator |
2026-01-02 00:34:20.107165 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-02 00:34:20.107176 | orchestrator | Friday 02 January 2026 00:34:07 +0000 (0:00:00.863) 0:07:29.755 ********
2026-01-02 00:34:20.107192 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107212 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107231 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107250 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107268 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107287 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107306 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-02 00:34:20.107322 | orchestrator |
2026-01-02 00:34:20.107334 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-02 00:34:20.107353 | orchestrator | Friday 02 January 2026 00:34:09 +0000 (0:00:01.857) 0:07:31.613 ********
2026-01-02 00:34:20.107365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:20.107376 | orchestrator |
2026-01-02 00:34:20.107388 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-02 00:34:20.107399 | orchestrator | Friday 02 January 2026 00:34:10 +0000 (0:00:00.779) 0:07:32.392 ********
2026-01-02 00:34:20.107410 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:20.107421 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:20.107432 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:20.107443 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:20.107453 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:20.107464 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:20.107475 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:20.107486 | orchestrator |
2026-01-02 00:34:20.107497 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-02 00:34:20.107516 | orchestrator | Friday 02 January 2026 00:34:20 +0000 (0:00:09.639) 0:07:42.031 ********
2026-01-02 00:34:51.285573 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:51.285717 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:51.285736 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:51.285748 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:51.285760 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:51.285771 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:51.285782 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:51.285794 | orchestrator |
2026-01-02 00:34:51.285808 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-02 00:34:51.285821 | orchestrator | Friday 02 January 2026 00:34:22 +0000 (0:00:02.049) 0:07:44.081 ********
2026-01-02 00:34:51.285832 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:51.285843 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:51.285854 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:51.285865 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:51.285876 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:51.285887 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:51.285898 | orchestrator |
2026-01-02 00:34:51.285910 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-02 00:34:51.285921 | orchestrator | Friday 02 January 2026 00:34:23 +0000 (0:00:01.355) 0:07:45.436 ********
2026-01-02 00:34:51.285932 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.285945 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.285956 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.285967 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.285978 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.285989 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.286000 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.286012 | orchestrator |
2026-01-02 00:34:51.286100 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-02 00:34:51.286114 | orchestrator |
2026-01-02 00:34:51.286127 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-02 00:34:51.286141 | orchestrator | Friday 02 January 2026 00:34:24 +0000 (0:00:01.242) 0:07:46.678 ********
2026-01-02 00:34:51.286154 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:51.286167 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:51.286180 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:51.286193 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:51.286205 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:51.286218 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:51.286231 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:51.286243 | orchestrator |
2026-01-02 00:34:51.286256 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-02 00:34:51.286299 | orchestrator |
2026-01-02 00:34:51.286313 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-02 00:34:51.286327 | orchestrator | Friday 02 January 2026 00:34:25 +0000 (0:00:00.693) 0:07:47.372 ********
2026-01-02 00:34:51.286339 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.286352 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.286365 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.286376 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.286387 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.286398 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.286409 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.286420 | orchestrator |
2026-01-02 00:34:51.286450 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-02 00:34:51.286462 | orchestrator | Friday 02 January 2026 00:34:26 +0000 (0:00:01.342) 0:07:48.715 ********
2026-01-02 00:34:51.286473 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:51.286484 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:51.286495 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:51.286506 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:51.286533 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:51.286545 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:51.286556 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:51.286567 | orchestrator |
2026-01-02 00:34:51.286578 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-02 00:34:51.286589 | orchestrator | Friday 02 January 2026 00:34:28 +0000 (0:00:01.482) 0:07:50.198 ********
2026-01-02 00:34:51.286600 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:34:51.286611 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:34:51.286622 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:34:51.286633 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:34:51.286644 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:34:51.286655 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:34:51.286665 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:34:51.286676 | orchestrator |
2026-01-02 00:34:51.286687 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-02 00:34:51.286698 | orchestrator | Friday 02 January 2026 00:34:28 +0000 (0:00:00.513) 0:07:50.711 ********
2026-01-02 00:34:51.286710 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:51.286723 | orchestrator |
2026-01-02 00:34:51.286734 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-02 00:34:51.286745 | orchestrator | Friday 02 January 2026 00:34:29 +0000 (0:00:01.032) 0:07:51.743 ********
2026-01-02 00:34:51.286759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:51.286773 | orchestrator |
2026-01-02 00:34:51.286784 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-02 00:34:51.286795 | orchestrator | Friday 02 January 2026 00:34:30 +0000 (0:00:00.830) 0:07:52.574 ********
2026-01-02 00:34:51.286805 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.286816 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.286827 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.286838 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.286849 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.286859 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.286870 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.286881 | orchestrator |
2026-01-02 00:34:51.286892 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-02 00:34:51.286925 | orchestrator | Friday 02 January 2026 00:34:39 +0000 (0:00:08.797) 0:08:01.372 ********
2026-01-02 00:34:51.286947 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.286959 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.286969 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.286980 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.286991 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.287002 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.287013 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.287024 | orchestrator |
2026-01-02 00:34:51.287035 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-02 00:34:51.287046 | orchestrator | Friday 02 January 2026 00:34:40 +0000 (0:00:01.066) 0:08:02.438 ********
2026-01-02 00:34:51.287057 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.287068 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.287079 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.287090 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.287101 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.287111 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.287122 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.287133 | orchestrator |
2026-01-02 00:34:51.287144 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-02 00:34:51.287155 | orchestrator | Friday 02 January 2026 00:34:41 +0000 (0:00:01.447) 0:08:03.886 ********
2026-01-02 00:34:51.287166 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.287176 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.287187 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.287198 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.287209 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.287220 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.287230 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.287241 | orchestrator |
2026-01-02 00:34:51.287252 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-02 00:34:51.287263 | orchestrator | Friday 02 January 2026 00:34:43 +0000 (0:00:02.002) 0:08:05.888 ********
2026-01-02 00:34:51.287274 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.287284 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.287295 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.287306 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.287317 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.287328 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.287339 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.287350 | orchestrator |
2026-01-02 00:34:51.287361 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-02 00:34:51.287371 | orchestrator | Friday 02 January 2026 00:34:45 +0000 (0:00:01.274) 0:08:07.162 ********
2026-01-02 00:34:51.287382 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.287393 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.287404 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.287415 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.287426 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.287437 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.287448 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.287459 | orchestrator |
2026-01-02 00:34:51.287470 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-02 00:34:51.287486 | orchestrator |
2026-01-02 00:34:51.287497 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-02 00:34:51.287508 | orchestrator | Friday 02 January 2026 00:34:46 +0000 (0:00:01.207) 0:08:08.370 ********
2026-01-02 00:34:51.287560 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:51.287573 | orchestrator |
2026-01-02 00:34:51.287584 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-02 00:34:51.287595 | orchestrator | Friday 02 January 2026 00:34:47 +0000 (0:00:00.761) 0:08:09.132 ********
2026-01-02 00:34:51.287614 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:51.287625 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:51.287636 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:51.287647 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:51.287658 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:51.287669 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:51.287680 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:51.287691 | orchestrator |
2026-01-02 00:34:51.287702 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-02 00:34:51.287714 | orchestrator | Friday 02 January 2026 00:34:48 +0000 (0:00:01.094) 0:08:10.227 ********
2026-01-02 00:34:51.287724 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:51.287735 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:51.287746 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:51.287757 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:51.287768 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:51.287779 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:51.287790 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:51.287801 | orchestrator |
2026-01-02 00:34:51.287812 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-02 00:34:51.287823 | orchestrator | Friday 02 January 2026 00:34:49 +0000 (0:00:01.161) 0:08:11.388 ********
2026-01-02 00:34:51.287834 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:34:51.287845 | orchestrator |
2026-01-02 00:34:51.287856 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-02 00:34:51.287867 | orchestrator | Friday 02 January 2026 00:34:50 +0000 (0:00:00.795) 0:08:12.184 ********
2026-01-02 00:34:51.287878 | orchestrator | ok: [testbed-manager]
2026-01-02 00:34:51.287889 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:34:51.287900 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:34:51.287911 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:34:51.287922 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:34:51.287933 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:34:51.287944 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:34:51.287955 | orchestrator |
2026-01-02 00:34:51.287966 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-02 00:34:51.287985 | orchestrator | Friday 02 January 2026 00:34:51 +0000 (0:00:01.029) 0:08:13.213 ********
2026-01-02 00:34:52.939064 | orchestrator | changed: [testbed-manager]
2026-01-02 00:34:52.939216 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:34:52.939233 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:34:52.939248 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:34:52.939279 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:34:52.939292 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:34:52.939304 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:34:52.939317 | orchestrator |
2026-01-02 00:34:52.939331 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:34:52.939345 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-02 00:34:52.939358 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-02 00:34:52.939369 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-02 00:34:52.939381 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-02 00:34:52.939391 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-02 00:34:52.939432 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-02 00:34:52.939443 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-02 00:34:52.939454 | orchestrator |
2026-01-02 00:34:52.939466 | orchestrator |
2026-01-02 00:34:52.939477 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:34:52.939488 | orchestrator | Friday 02 January 2026 00:34:52 +0000 (0:00:01.123) 0:08:14.336 ********
2026-01-02 00:34:52.939498 | orchestrator | ===============================================================================
2026-01-02 00:34:52.939509 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.57s
2026-01-02 00:34:52.939554 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.23s
2026-01-02 00:34:52.939570 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.12s
2026-01-02 00:34:52.939584 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.45s
2026-01-02 00:34:52.939614 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.01s
2026-01-02 00:34:52.939627 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.08s
2026-01-02 00:34:52.939640 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.03s
2026-01-02 00:34:52.939653 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.86s
2026-01-02 00:34:52.939666 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 9.69s
2026-01-02 00:34:52.939680 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.64s
2026-01-02 00:34:52.939693 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.02s
2026-01-02 00:34:52.939707 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.80s
2026-01-02 00:34:52.939720 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.72s
2026-01-02 00:34:52.939733 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.59s
2026-01-02 00:34:52.939746 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.39s
2026-01-02 00:34:52.939760 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.37s
2026-01-02 00:34:52.939773 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.18s
2026-01-02 00:34:52.939786 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.11s
2026-01-02 00:34:52.939799 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.02s
2026-01-02 00:34:52.939813 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.98s
2026-01-02 00:34:53.214653 | orchestrator | + osism apply fail2ban
2026-01-02 00:35:05.771079 | orchestrator | 2026-01-02 00:35:05 | INFO  | Task 2231c029-e59e-4291-a844-56bf48438602 (fail2ban) was prepared for execution.
2026-01-02 00:35:05.771234 | orchestrator | 2026-01-02 00:35:05 | INFO  | It takes a moment until task 2231c029-e59e-4291-a844-56bf48438602 (fail2ban) has been started and output is visible here.
2026-01-02 00:35:27.031002 | orchestrator |
2026-01-02 00:35:27.031155 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-02 00:35:27.031180 | orchestrator |
2026-01-02 00:35:27.031196 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-02 00:35:27.031213 | orchestrator | Friday 02 January 2026 00:35:09 +0000 (0:00:00.247) 0:00:00.247 ********
2026-01-02 00:35:27.031231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:35:27.031283 | orchestrator |
2026-01-02 00:35:27.031298 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-02 00:35:27.031312 | orchestrator | Friday 02 January 2026 00:35:10 +0000 (0:00:01.102) 0:00:01.350 ********
2026-01-02 00:35:27.031327 | orchestrator | changed: [testbed-manager]
2026-01-02 00:35:27.031344 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:35:27.031359 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:35:27.031374 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:35:27.031388 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:35:27.031414 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:35:27.031430 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:35:27.031445 | orchestrator |
2026-01-02 00:35:27.031480 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-02 00:35:27.031495 | orchestrator | Friday 02 January 2026 00:35:22 +0000 (0:00:11.041) 0:00:12.392 ********
2026-01-02 00:35:27.031509 | orchestrator | changed: [testbed-manager]
2026-01-02 00:35:27.031525 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:35:27.031541 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:35:27.031556 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:35:27.031570 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:35:27.031585 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:35:27.031600 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:35:27.031615 | orchestrator |
2026-01-02 00:35:27.031630 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-02 00:35:27.031648 | orchestrator | Friday 02 January 2026 00:35:23 +0000 (0:00:01.450) 0:00:13.811 ********
2026-01-02 00:35:27.031664 | orchestrator | ok: [testbed-manager]
2026-01-02 00:35:27.031682 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:35:27.031697 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:35:27.031712 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:35:27.031728 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:35:27.031743 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:35:27.031756 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:35:27.031772 | orchestrator |
2026-01-02 00:35:27.031788 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-02 00:35:27.031804 | orchestrator | Friday 02 January 2026 00:35:24 +0000 (0:00:01.450) 0:00:15.261 ********
2026-01-02 00:35:27.031819 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:35:27.031835 | orchestrator | changed: [testbed-manager]
2026-01-02 00:35:27.031850 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:35:27.031867 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:35:27.031884 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:35:27.031900 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:35:27.031915 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:35:27.031930 | orchestrator |
2026-01-02 00:35:27.031946 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:35:27.031962 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.031998 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.032013 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.032028 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.032043 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.032058 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.032072 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:35:27.032104 | orchestrator |
2026-01-02 00:35:27.032120 | orchestrator |
2026-01-02 00:35:27.032136 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:35:27.032151 | orchestrator | Friday 02 January 2026 00:35:26 +0000 (0:00:01.727) 0:00:16.989 ********
2026-01-02 00:35:27.032166 | orchestrator | ===============================================================================
2026-01-02 00:35:27.032181 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.04s
2026-01-02 00:35:27.032196 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.73s
2026-01-02 00:35:27.032208 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.45s
2026-01-02 00:35:27.032222 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.42s
2026-01-02 00:35:27.032237 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.10s
2026-01-02 00:35:27.329837 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-02 00:35:27.329941 | orchestrator | + osism apply network
2026-01-02 00:35:39.701113 | orchestrator | 2026-01-02 00:35:39 | INFO  | Task c6808943-d8f2-42e2-a917-4cd019c5b12c (network) was prepared for execution.
2026-01-02 00:35:39.701220 | orchestrator | 2026-01-02 00:35:39 | INFO  | It takes a moment until task c6808943-d8f2-42e2-a917-4cd019c5b12c (network) has been started and output is visible here.
2026-01-02 00:36:06.840565 | orchestrator |
2026-01-02 00:36:06.840688 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-02 00:36:06.840708 | orchestrator |
2026-01-02 00:36:06.840721 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-02 00:36:06.840734 | orchestrator | Friday 02 January 2026 00:35:43 +0000 (0:00:00.187) 0:00:00.187 ********
2026-01-02 00:36:06.840746 | orchestrator | ok: [testbed-manager]
2026-01-02 00:36:06.840761 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:36:06.840773 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:36:06.840785 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:36:06.840796 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:36:06.840808 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:36:06.840819 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:36:06.840831 | orchestrator |
2026-01-02 00:36:06.840842 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-02 00:36:06.840853 | orchestrator | Friday 02 January 2026 00:35:44 +0000 (0:00:00.499) 0:00:00.687 ********
2026-01-02 00:36:06.840867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:36:06.840880 | orchestrator |
2026-01-02 00:36:06.840892 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-02 00:36:06.840903 | orchestrator | Friday 02 January 2026 00:35:45 +0000 (0:00:00.850) 0:00:01.537 ********
2026-01-02 00:36:06.840914 | orchestrator | ok: [testbed-manager]
2026-01-02 00:36:06.840926 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:36:06.840937 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:36:06.840948 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:36:06.840960 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:36:06.840971 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:36:06.840982 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:36:06.840993 | orchestrator |
2026-01-02 00:36:06.841005 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-02 00:36:06.841016 | orchestrator | Friday 02 January 2026 00:35:47 +0000 (0:00:02.151) 0:00:03.688 ********
2026-01-02 00:36:06.841028 | orchestrator | ok: [testbed-manager]
2026-01-02 00:36:06.841039 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:36:06.841053 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:36:06.841066 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:36:06.841080 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:36:06.841119 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:36:06.841133 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:36:06.841147 | orchestrator |
2026-01-02 00:36:06.841161 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-02 00:36:06.841174 | orchestrator | Friday 02 January 2026 00:35:48 +0000 (0:00:00.958) 0:00:05.316 ********
2026-01-02 00:36:06.841188 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-02 00:36:06.841201 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-02 00:36:06.841214 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-02 00:36:06.841227 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-02 00:36:06.841241 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-02 00:36:06.841254 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-02 00:36:06.841267 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-02 00:36:06.841280 | orchestrator |
2026-01-02 00:36:06.841294 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-02 00:36:06.841307 | orchestrator | Friday 02 January 2026 00:35:49 +0000 (0:00:00.958) 0:00:06.275 ********
2026-01-02 00:36:06.841320 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-02 00:36:06.841334 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:36:06.841348 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-02 00:36:06.841361 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-02 00:36:06.841375 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 00:36:06.841389 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-02 00:36:06.841434 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-02 00:36:06.841447 | orchestrator |
2026-01-02 00:36:06.841458 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-02 00:36:06.841470 | orchestrator | Friday 02 January 2026 00:35:52 +0000 (0:00:02.948) 0:00:09.224 ********
2026-01-02 00:36:06.841481 | orchestrator | changed: [testbed-manager]
2026-01-02 00:36:06.841493 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:36:06.841504 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:36:06.841515 | orchestrator | changed:
[testbed-node-3] 2026-01-02 00:36:06.841526 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:36:06.841537 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:36:06.841548 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:36:06.841559 | orchestrator | 2026-01-02 00:36:06.841570 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-02 00:36:06.841581 | orchestrator | Friday 02 January 2026 00:35:54 +0000 (0:00:01.559) 0:00:10.784 ******** 2026-01-02 00:36:06.841592 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 00:36:06.841604 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 00:36:06.841615 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-02 00:36:06.841625 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-02 00:36:06.841637 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-02 00:36:06.841648 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-02 00:36:06.841659 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-02 00:36:06.841670 | orchestrator | 2026-01-02 00:36:06.841681 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-02 00:36:06.841692 | orchestrator | Friday 02 January 2026 00:35:56 +0000 (0:00:01.576) 0:00:12.360 ******** 2026-01-02 00:36:06.841703 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:06.841714 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:06.841725 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:06.841737 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:06.841748 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:06.841759 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:06.841770 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:06.841781 | orchestrator | 2026-01-02 00:36:06.841792 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-02 00:36:06.841822 | 
orchestrator | Friday 02 January 2026 00:35:57 +0000 (0:00:01.056) 0:00:13.416 ******** 2026-01-02 00:36:06.841842 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:06.841854 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:06.841865 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:06.841877 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:06.841888 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:06.841899 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:06.841910 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:06.841921 | orchestrator | 2026-01-02 00:36:06.841932 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-02 00:36:06.841943 | orchestrator | Friday 02 January 2026 00:35:57 +0000 (0:00:00.652) 0:00:14.068 ******** 2026-01-02 00:36:06.841954 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:06.841965 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:06.841976 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:06.841987 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:06.841998 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:06.842009 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:06.842075 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:06.842087 | orchestrator | 2026-01-02 00:36:06.842098 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-02 00:36:06.842109 | orchestrator | Friday 02 January 2026 00:35:59 +0000 (0:00:02.141) 0:00:16.210 ******** 2026-01-02 00:36:06.842120 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:06.842132 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:06.842143 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:06.842154 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:06.842165 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:06.842176 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:06.842188 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-02 00:36:06.842201 | orchestrator | 2026-01-02 00:36:06.842212 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-02 00:36:06.842223 | orchestrator | Friday 02 January 2026 00:36:00 +0000 (0:00:00.879) 0:00:17.090 ******** 2026-01-02 00:36:06.842250 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:06.842262 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:36:06.842273 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:36:06.842284 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:36:06.842295 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:36:06.842306 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:36:06.842317 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:36:06.842328 | orchestrator | 2026-01-02 00:36:06.842339 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-02 00:36:06.842350 | orchestrator | Friday 02 January 2026 00:36:02 +0000 (0:00:01.682) 0:00:18.772 ******** 2026-01-02 00:36:06.842361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:36:06.842374 | orchestrator | 2026-01-02 00:36:06.842385 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-02 00:36:06.842396 | orchestrator | Friday 02 January 2026 00:36:03 +0000 (0:00:01.280) 0:00:20.052 ******** 2026-01-02 00:36:06.842437 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:06.842456 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:06.842476 | orchestrator 
| ok: [testbed-node-1] 2026-01-02 00:36:06.842501 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:06.842520 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:06.842532 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:06.842543 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:06.842553 | orchestrator | 2026-01-02 00:36:06.842564 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-02 00:36:06.842575 | orchestrator | Friday 02 January 2026 00:36:04 +0000 (0:00:01.149) 0:00:21.202 ******** 2026-01-02 00:36:06.842595 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:06.842606 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:06.842617 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:06.842628 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:06.842639 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:06.842650 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:06.842660 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:06.842671 | orchestrator | 2026-01-02 00:36:06.842682 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-02 00:36:06.842693 | orchestrator | Friday 02 January 2026 00:36:05 +0000 (0:00:00.654) 0:00:21.856 ******** 2026-01-02 00:36:06.842704 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842715 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842726 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842736 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842747 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842758 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842769 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842779 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842790 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842801 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842812 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842823 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842833 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-02 00:36:06.842844 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-02 00:36:06.842855 | orchestrator | 2026-01-02 00:36:06.842875 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-02 00:36:22.119137 | orchestrator | Friday 02 January 2026 00:36:06 +0000 (0:00:01.298) 0:00:23.155 ******** 2026-01-02 00:36:22.119254 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:22.119274 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:22.119287 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:22.119299 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:22.119310 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:22.119340 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:22.119353 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:22.119374 | orchestrator | 2026-01-02 00:36:22.119414 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-02 00:36:22.119435 | orchestrator | Friday 02 January 2026 00:36:07 +0000 (0:00:00.664) 0:00:23.819 ******** 2026-01-02 00:36:22.119458 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:36:22.119480 | orchestrator | 2026-01-02 00:36:22.119492 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-02 00:36:22.119503 | orchestrator | Friday 02 January 2026 00:36:11 +0000 (0:00:04.407) 0:00:28.226 ******** 2026-01-02 00:36:22.119516 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119569 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
42}}) 2026-01-02 00:36:22.119622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119760 | orchestrator | 2026-01-02 00:36:22.119774 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-02 00:36:22.119787 | orchestrator | Friday 02 January 2026 00:36:17 +0000 (0:00:05.196) 0:00:33.423 ******** 2026-01-02 00:36:22.119801 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119836 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119847 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-02 00:36:22.119908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 
'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:22.119965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:35.343336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-02 00:36:35.343539 | orchestrator | 2026-01-02 00:36:35.343559 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-02 00:36:35.343572 | orchestrator | Friday 02 January 2026 00:36:22 +0000 (0:00:05.006) 0:00:38.429 ******** 2026-01-02 00:36:35.343586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:36:35.343597 | orchestrator | 2026-01-02 00:36:35.343609 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-02 00:36:35.343620 | orchestrator | Friday 02 January 2026 00:36:23 +0000 (0:00:01.061) 0:00:39.491 ******** 2026-01-02 00:36:35.343631 | orchestrator | ok: [testbed-manager] 2026-01-02 00:36:35.343645 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:36:35.343656 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:36:35.343667 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:36:35.343678 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:36:35.343689 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:36:35.343701 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:36:35.343711 | orchestrator | 2026-01-02 00:36:35.343723 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-02 00:36:35.343734 | orchestrator | Friday 02 January 2026 00:36:24 +0000 (0:00:00.986) 0:00:40.477 ******** 2026-01-02 00:36:35.343745 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.343756 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.343767 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.343778 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.343789 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.343800 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.343811 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.343821 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.343832 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:35.343844 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.343870 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.343884 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.343896 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.343909 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:35.343922 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.343935 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.343949 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.343962 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.343975 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:35.343988 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.344001 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.344014 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.344027 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.344039 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:35.344061 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.344101 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.344114 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.344128 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.344141 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:35.344152 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:35.344163 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-02 00:36:35.344174 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-02 00:36:35.344185 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-02 00:36:35.344196 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-02 00:36:35.344207 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:35.344218 | orchestrator | 2026-01-02 00:36:35.344229 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-02 00:36:35.344259 | orchestrator | Friday 02 January 2026 00:36:24 +0000 (0:00:00.773) 0:00:41.251 ******** 2026-01-02 00:36:35.344272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:36:35.344283 | orchestrator | 2026-01-02 00:36:35.344294 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] *** 2026-01-02 00:36:35.344305 | orchestrator | Friday 02 January 2026 00:36:25 +0000 (0:00:01.073) 0:00:42.325 ******** 2026-01-02 00:36:35.344316 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:35.344327 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:35.344338 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:35.344349 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:35.344360 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:36:35.344371 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:35.344407 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:35.344418 | orchestrator | 2026-01-02 00:36:35.344429 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-02 00:36:35.344440 | orchestrator | Friday 02 January 2026 00:36:26 +0000 (0:00:00.597) 0:00:42.922 ******** 2026-01-02 00:36:35.344451 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:35.344462 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:35.344473 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:35.344484 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:35.344494 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:35.344505 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:35.344516 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:35.344527 | orchestrator | 2026-01-02 00:36:35.344538 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-02 00:36:35.344549 | orchestrator | Friday 02 January 2026 00:36:27 +0000 (0:00:00.795) 0:00:43.718 ******** 2026-01-02 00:36:35.344559 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:36:35.344570 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:36:35.344581 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:36:35.344591 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:36:35.344602 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:36:35.344613 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:36:35.344624 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:36:35.344635 | orchestrator | 2026-01-02 00:36:35.344645 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-02 00:36:35.344656 | orchestrator | Friday 02 January 2026 00:36:28 +0000 (0:00:00.616) 0:00:44.335 ******** 2026-01-02 
00:36:35.344667 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:36:35.344686 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:36:35.344696 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:36:35.344707 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:36:35.344718 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:36:35.344729 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:36:35.344740 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:36:35.344751 | orchestrator |
2026-01-02 00:36:35.344762 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-02 00:36:35.344779 | orchestrator | Friday 02 January 2026 00:36:28 +0000 (0:00:00.790) 0:00:45.125 ********
2026-01-02 00:36:35.344791 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:36:35.344802 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:36:35.344813 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:36:35.344824 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:36:35.344834 | orchestrator | ok: [testbed-manager]
2026-01-02 00:36:35.344845 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:36:35.344856 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:36:35.344867 | orchestrator |
2026-01-02 00:36:35.344878 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-02 00:36:35.344889 | orchestrator | Friday 02 January 2026 00:36:30 +0000 (0:00:01.532) 0:00:46.658 ********
2026-01-02 00:36:35.344900 | orchestrator | ok: [testbed-manager]
2026-01-02 00:36:35.344911 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:36:35.344922 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:36:35.344933 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:36:35.344943 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:36:35.344954 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:36:35.344965 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:36:35.344976 | orchestrator |
2026-01-02 00:36:35.344987 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-02 00:36:35.344998 | orchestrator | Friday 02 January 2026 00:36:31 +0000 (0:00:01.339) 0:00:47.997 ********
2026-01-02 00:36:35.345008 | orchestrator | ok: [testbed-manager]
2026-01-02 00:36:35.345019 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:36:35.345030 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:36:35.345041 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:36:35.345052 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:36:35.345063 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:36:35.345074 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:36:35.345084 | orchestrator |
2026-01-02 00:36:35.345095 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-02 00:36:35.345106 | orchestrator | Friday 02 January 2026 00:36:33 +0000 (0:00:02.322) 0:00:50.320 ********
2026-01-02 00:36:35.345117 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:36:35.345129 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:36:35.345139 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:36:35.345150 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:36:35.345161 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:36:35.345172 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:36:35.345183 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:36:35.345194 | orchestrator |
2026-01-02 00:36:35.345205 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-02 00:36:35.345216 | orchestrator | Friday 02 January 2026 00:36:34 +0000 (0:00:00.630) 0:00:50.950 ********
2026-01-02 00:36:35.345226 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:36:35.345237 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:36:35.345248 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:36:35.345259 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:36:35.345269 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:36:35.345281 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:36:35.345291 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:36:35.345302 | orchestrator |
2026-01-02 00:36:35.345313 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:36:35.684518 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-02 00:36:35.684652 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 00:36:35.684668 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 00:36:35.684680 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 00:36:35.684692 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 00:36:35.684703 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 00:36:35.684714 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 00:36:35.684726 | orchestrator |
2026-01-02 00:36:35.684738 | orchestrator |
2026-01-02 00:36:35.684749 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:36:35.684761 | orchestrator | Friday 02 January 2026 00:36:35 +0000 (0:00:00.706) 0:00:51.657 ********
2026-01-02 00:36:35.684773 | orchestrator | ===============================================================================
2026-01-02 00:36:35.684784 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.20s
2026-01-02 00:36:35.684795 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.01s
2026-01-02 00:36:35.684805 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.41s
2026-01-02 00:36:35.684816 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.95s
2026-01-02 00:36:35.684827 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.32s
2026-01-02 00:36:35.684837 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.15s
2026-01-02 00:36:35.684848 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.14s
2026-01-02 00:36:35.684858 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2026-01-02 00:36:35.684869 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s
2026-01-02 00:36:35.684894 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.58s
2026-01-02 00:36:35.684906 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.56s
2026-01-02 00:36:35.684916 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.53s
2026-01-02 00:36:35.684927 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.34s
2026-01-02 00:36:35.684938 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.30s
2026-01-02 00:36:35.684948 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2026-01-02 00:36:35.684959 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s
2026-01-02 00:36:35.684970 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.07s
2026-01-02 00:36:35.684980 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.06s
2026-01-02 00:36:35.684991 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.06s
2026-01-02 00:36:35.685007 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s
2026-01-02 00:36:35.978210 | orchestrator | + osism apply wireguard
2026-01-02 00:36:48.127656 | orchestrator | 2026-01-02 00:36:48 | INFO  | Task f16d494d-70b3-44b3-87fd-8669afa803ca (wireguard) was prepared for execution.
2026-01-02 00:36:48.127805 | orchestrator | 2026-01-02 00:36:48 | INFO  | It takes a moment until task f16d494d-70b3-44b3-87fd-8669afa803ca (wireguard) has been started and output is visible here.
2026-01-02 00:37:07.868396 | orchestrator |
2026-01-02 00:37:07.868480 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-02 00:37:07.868488 | orchestrator |
2026-01-02 00:37:07.868495 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-02 00:37:07.868502 | orchestrator | Friday 02 January 2026 00:36:52 +0000 (0:00:00.164) 0:00:00.164 ********
2026-01-02 00:37:07.868509 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:07.868517 | orchestrator |
2026-01-02 00:37:07.868523 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-02 00:37:07.868529 | orchestrator | Friday 02 January 2026 00:36:53 +0000 (0:00:01.389) 0:00:01.553 ********
2026-01-02 00:37:07.868536 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868543 | orchestrator |
2026-01-02 00:37:07.868549 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-02 00:37:07.868556 | orchestrator | Friday 02 January 2026 00:37:00 +0000 (0:00:06.668) 0:00:08.222 ********
2026-01-02 00:37:07.868563 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868569 | orchestrator |
2026-01-02 00:37:07.868576 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-02 00:37:07.868582 | orchestrator | Friday 02 January 2026 00:37:00 +0000 (0:00:00.585) 0:00:08.808 ********
2026-01-02 00:37:07.868589 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868597 | orchestrator |
2026-01-02 00:37:07.868601 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-02 00:37:07.868605 | orchestrator | Friday 02 January 2026 00:37:01 +0000 (0:00:00.440) 0:00:09.248 ********
2026-01-02 00:37:07.868609 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:07.868613 | orchestrator |
2026-01-02 00:37:07.868617 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-02 00:37:07.868621 | orchestrator | Friday 02 January 2026 00:37:01 +0000 (0:00:00.676) 0:00:09.924 ********
2026-01-02 00:37:07.868625 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:07.868629 | orchestrator |
2026-01-02 00:37:07.868633 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-02 00:37:07.868636 | orchestrator | Friday 02 January 2026 00:37:02 +0000 (0:00:00.399) 0:00:10.324 ********
2026-01-02 00:37:07.868641 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:07.868645 | orchestrator |
2026-01-02 00:37:07.868648 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-02 00:37:07.868652 | orchestrator | Friday 02 January 2026 00:37:02 +0000 (0:00:00.412) 0:00:10.737 ********
2026-01-02 00:37:07.868656 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868660 | orchestrator |
2026-01-02 00:37:07.868663 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-02 00:37:07.868667 | orchestrator | Friday 02 January 2026 00:37:03 +0000 (0:00:01.144) 0:00:11.881 ********
2026-01-02 00:37:07.868671 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-02 00:37:07.868675 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868679 | orchestrator |
2026-01-02 00:37:07.868683 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-02 00:37:07.868687 | orchestrator | Friday 02 January 2026 00:37:04 +0000 (0:00:00.929) 0:00:12.811 ********
2026-01-02 00:37:07.868691 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868694 | orchestrator |
2026-01-02 00:37:07.868698 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-02 00:37:07.868702 | orchestrator | Friday 02 January 2026 00:37:06 +0000 (0:00:01.690) 0:00:14.501 ********
2026-01-02 00:37:07.868706 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:07.868709 | orchestrator |
2026-01-02 00:37:07.868713 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:37:07.868717 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:37:07.868741 | orchestrator |
2026-01-02 00:37:07.868745 | orchestrator |
2026-01-02 00:37:07.868749 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:37:07.868753 | orchestrator | Friday 02 January 2026 00:37:07 +0000 (0:00:00.932) 0:00:15.434 ********
2026-01-02 00:37:07.868757 | orchestrator | ===============================================================================
2026-01-02 00:37:07.868761 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.67s
2026-01-02 00:37:07.868765 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s
2026-01-02 00:37:07.868769 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.39s
2026-01-02 00:37:07.868773 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s
2026-01-02 00:37:07.868777 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s
2026-01-02 00:37:07.868781 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2026-01-02 00:37:07.868785 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s
2026-01-02 00:37:07.868789 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s
2026-01-02 00:37:07.868793 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2026-01-02 00:37:07.868796 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-01-02 00:37:07.868800 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-01-02 00:37:08.268756 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-02 00:37:08.318343 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-02 00:37:08.318469 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-02 00:37:08.398064 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 187 0 --:--:-- --:--:-- --:--:-- 189
2026-01-02 00:37:08.412492 | orchestrator | + osism apply --environment custom workarounds
2026-01-02 00:37:10.393472 | orchestrator | 2026-01-02 00:37:10 | INFO  | Trying to run play workarounds in environment custom
2026-01-02 00:37:20.579765 | orchestrator | 2026-01-02 00:37:20 | INFO  | Task 84b82b45-d812-40b5-8d15-2ccbe3cbc9a0 (workarounds) was prepared for execution.
2026-01-02 00:37:20.579879 | orchestrator | 2026-01-02 00:37:20 | INFO  | It takes a moment until task 84b82b45-d812-40b5-8d15-2ccbe3cbc9a0 (workarounds) has been started and output is visible here.
2026-01-02 00:37:44.741838 | orchestrator |
2026-01-02 00:37:44.741952 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 00:37:44.741970 | orchestrator |
2026-01-02 00:37:44.741982 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-02 00:37:44.741995 | orchestrator | Friday 02 January 2026 00:37:24 +0000 (0:00:00.091) 0:00:00.091 ********
2026-01-02 00:37:44.742007 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742074 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742087 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742099 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742110 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742122 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742133 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-02 00:37:44.742144 | orchestrator |
2026-01-02 00:37:44.742155 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-02 00:37:44.742167 | orchestrator |
2026-01-02 00:37:44.742198 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-02 00:37:44.742209 | orchestrator | Friday 02 January 2026 00:37:24 +0000 (0:00:00.566) 0:00:00.658 ********
2026-01-02 00:37:44.742221 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:44.742235 | orchestrator |
2026-01-02 00:37:44.742246 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-02 00:37:44.742257 | orchestrator |
2026-01-02 00:37:44.742268 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-02 00:37:44.742280 | orchestrator | Friday 02 January 2026 00:37:26 +0000 (0:00:02.205) 0:00:02.864 ********
2026-01-02 00:37:44.742291 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:37:44.742303 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:37:44.742334 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:37:44.742346 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:37:44.742357 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:37:44.742368 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:37:44.742379 | orchestrator |
2026-01-02 00:37:44.742392 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-02 00:37:44.742405 | orchestrator |
2026-01-02 00:37:44.742417 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-02 00:37:44.742430 | orchestrator | Friday 02 January 2026 00:37:28 +0000 (0:00:01.890) 0:00:04.754 ********
2026-01-02 00:37:44.742444 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-02 00:37:44.742458 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-02 00:37:44.742470 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-02 00:37:44.742483 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-02 00:37:44.742495 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-02 00:37:44.742509 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-02 00:37:44.742522 | orchestrator |
2026-01-02 00:37:44.742543 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-02 00:37:44.742555 | orchestrator | Friday 02 January 2026 00:37:30 +0000 (0:00:01.469) 0:00:06.224 ********
2026-01-02 00:37:44.742566 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:37:44.742578 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:37:44.742589 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:37:44.742600 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:37:44.742611 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:37:44.742622 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:37:44.742633 | orchestrator |
2026-01-02 00:37:44.742644 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-02 00:37:44.742655 | orchestrator | Friday 02 January 2026 00:37:34 +0000 (0:00:03.981) 0:00:10.206 ********
2026-01-02 00:37:44.742666 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:37:44.742677 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:37:44.742688 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:37:44.742699 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:37:44.742710 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:37:44.742721 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:37:44.742732 | orchestrator |
2026-01-02 00:37:44.742743 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-02 00:37:44.742754 | orchestrator |
2026-01-02 00:37:44.742765 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-02 00:37:44.742776 | orchestrator | Friday 02 January 2026 00:37:34 +0000 (0:00:00.668) 0:00:10.875 ********
2026-01-02 00:37:44.742787 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:37:44.742798 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:37:44.742816 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:37:44.742827 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:37:44.742838 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:37:44.742849 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:37:44.742860 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:44.742870 | orchestrator |
2026-01-02 00:37:44.742882 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-02 00:37:44.742893 | orchestrator | Friday 02 January 2026 00:37:36 +0000 (0:00:01.520) 0:00:12.395 ********
2026-01-02 00:37:44.742904 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:37:44.742915 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:37:44.742926 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:37:44.742937 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:37:44.742948 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:37:44.742959 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:37:44.742988 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:44.743000 | orchestrator |
2026-01-02 00:37:44.743011 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-02 00:37:44.743022 | orchestrator | Friday 02 January 2026 00:37:38 +0000 (0:00:01.536) 0:00:13.932 ********
2026-01-02 00:37:44.743033 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:37:44.743044 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:37:44.743055 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:37:44.743066 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:37:44.743077 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:37:44.743088 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:37:44.743099 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:44.743110 | orchestrator |
2026-01-02 00:37:44.743121 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-02 00:37:44.743132 | orchestrator | Friday 02 January 2026 00:37:39 +0000 (0:00:01.585) 0:00:15.517 ********
2026-01-02 00:37:44.743144 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:37:44.743155 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:37:44.743166 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:37:44.743176 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:37:44.743188 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:37:44.743199 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:37:44.743209 | orchestrator | changed: [testbed-manager]
2026-01-02 00:37:44.743220 | orchestrator |
2026-01-02 00:37:44.743231 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-02 00:37:44.743242 | orchestrator | Friday 02 January 2026 00:37:41 +0000 (0:00:01.810) 0:00:17.328 ********
2026-01-02 00:37:44.743253 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:37:44.743264 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:37:44.743275 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:37:44.743286 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:37:44.743297 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:37:44.743308 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:37:44.743353 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:37:44.743365 | orchestrator |
2026-01-02 00:37:44.743376 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-02 00:37:44.743387 | orchestrator |
2026-01-02 00:37:44.743398 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-02 00:37:44.743409 | orchestrator | Friday 02 January 2026 00:37:42 +0000 (0:00:00.596) 0:00:17.925 ********
2026-01-02 00:37:44.743420 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:37:44.743431 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:37:44.743442 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:37:44.743453 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:37:44.743464 | orchestrator | ok: [testbed-manager]
2026-01-02 00:37:44.743475 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:37:44.743486 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:37:44.743497 | orchestrator |
2026-01-02 00:37:44.743508 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:37:44.743528 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:37:44.743540 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:37:44.743552 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:37:44.743568 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:37:44.743579 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:37:44.743590 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:37:44.743601 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:37:44.743612 | orchestrator |
2026-01-02 00:37:44.743623 | orchestrator |
2026-01-02 00:37:44.743634 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:37:44.743645 | orchestrator | Friday 02 January 2026 00:37:44 +0000 (0:00:02.686) 0:00:20.611 ********
2026-01-02 00:37:44.743656 | orchestrator | ===============================================================================
2026-01-02 00:37:44.743667 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.98s
2026-01-02 00:37:44.743678 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s
2026-01-02 00:37:44.743689 | orchestrator | Apply netplan configuration --------------------------------------------- 2.21s
2026-01-02 00:37:44.743700 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s
2026-01-02 00:37:44.743711 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s
2026-01-02 00:37:44.743722 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s
2026-01-02 00:37:44.743732 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.54s
2026-01-02 00:37:44.743743 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.52s
2026-01-02 00:37:44.743754 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s
2026-01-02 00:37:44.743765 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s
2026-01-02 00:37:44.743776 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2026-01-02 00:37:44.743794 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.57s
2026-01-02 00:37:45.419096 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-02 00:37:57.568043 | orchestrator | 2026-01-02 00:37:57 | INFO  | Task a3acedf9-0337-4466-b4c5-b123f2e9f6c1 (reboot) was prepared for execution.
2026-01-02 00:37:57.568155 | orchestrator | 2026-01-02 00:37:57 | INFO  | It takes a moment until task a3acedf9-0337-4466-b4c5-b123f2e9f6c1 (reboot) has been started and output is visible here.
2026-01-02 00:38:07.754931 | orchestrator |
2026-01-02 00:38:07.755038 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-02 00:38:07.755052 | orchestrator |
2026-01-02 00:38:07.755062 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-02 00:38:07.755071 | orchestrator | Friday 02 January 2026 00:38:01 +0000 (0:00:00.192) 0:00:00.192 ********
2026-01-02 00:38:07.755080 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:38:07.755090 | orchestrator |
2026-01-02 00:38:07.755099 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-02 00:38:07.755127 | orchestrator | Friday 02 January 2026 00:38:01 +0000 (0:00:00.098) 0:00:00.291 ********
2026-01-02 00:38:07.755136 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:38:07.755144 | orchestrator |
2026-01-02 00:38:07.755152 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-02 00:38:07.755161 | orchestrator | Friday 02 January 2026 00:38:02 +0000 (0:00:00.965) 0:00:01.257 ********
2026-01-02 00:38:07.755169 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:38:07.755177 | orchestrator |
2026-01-02 00:38:07.755185 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-02 00:38:07.755193 | orchestrator |
2026-01-02 00:38:07.755202 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-02 00:38:07.755210 | orchestrator | Friday 02 January 2026 00:38:02 +0000 (0:00:00.108) 0:00:01.365 ********
2026-01-02 00:38:07.755218 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:38:07.755226 | orchestrator |
2026-01-02 00:38:07.755234 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-02 00:38:07.755242 | orchestrator | Friday 02 January 2026 00:38:02 +0000 (0:00:00.100) 0:00:01.465 ********
2026-01-02 00:38:07.755250 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:38:07.755258 | orchestrator |
2026-01-02 00:38:07.755266 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-02 00:38:07.755274 | orchestrator | Friday 02 January 2026 00:38:03 +0000 (0:00:00.695) 0:00:02.160 ********
2026-01-02 00:38:07.755282 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:38:07.755290 | orchestrator |
2026-01-02 00:38:07.755350 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-02 00:38:07.755359 | orchestrator |
2026-01-02 00:38:07.755367 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-02 00:38:07.755375 | orchestrator | Friday 02 January 2026 00:38:03 +0000 (0:00:00.128) 0:00:02.289 ********
2026-01-02 00:38:07.755383 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:38:07.755391 | orchestrator |
2026-01-02 00:38:07.755399 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-02 00:38:07.755407 | orchestrator | Friday 02 January 2026 00:38:04 +0000 (0:00:00.214) 0:00:02.503 ********
2026-01-02 00:38:07.755415 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:38:07.755423 | orchestrator |
2026-01-02 00:38:07.755431 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-02 00:38:07.755439 | orchestrator | Friday 02 January 2026 00:38:04 +0000 (0:00:00.684) 0:00:03.188 ********
2026-01-02 00:38:07.755460 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:38:07.755470 | orchestrator |
2026-01-02 00:38:07.755480 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-02 00:38:07.755490 | orchestrator |
2026-01-02 00:38:07.755499 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-02 00:38:07.755509 | orchestrator | Friday 02 January 2026 00:38:04 +0000 (0:00:00.122) 0:00:03.310 ********
2026-01-02 00:38:07.755518 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:38:07.755527 | orchestrator |
2026-01-02 00:38:07.755537 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-02 00:38:07.755547 | orchestrator | Friday 02 January 2026 00:38:04 +0000 (0:00:00.114) 0:00:03.424 ********
2026-01-02 00:38:07.755557 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:38:07.755566 | orchestrator |
2026-01-02 00:38:07.755577 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-02 00:38:07.755586 | orchestrator | Friday 02 January 2026 00:38:05 +0000 (0:00:00.687) 0:00:04.111 ********
2026-01-02 00:38:07.755600 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:38:07.755615 | orchestrator |
2026-01-02 00:38:07.755629 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-02 00:38:07.755643 | orchestrator |
2026-01-02 00:38:07.755656 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-02 00:38:07.755671 | orchestrator | Friday 02 January 2026 00:38:05 +0000 (0:00:00.124) 0:00:04.236 ********
2026-01-02 00:38:07.755694 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:38:07.755703 | orchestrator |
2026-01-02 00:38:07.755711 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-02 00:38:07.755719 | orchestrator | Friday 02 January 2026 00:38:05 +0000 (0:00:00.113) 0:00:04.350 ********
2026-01-02 00:38:07.755727 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:38:07.755735 | orchestrator |
2026-01-02 00:38:07.755743 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-02 00:38:07.755751 | orchestrator | Friday 02 January 2026 00:38:06 +0000 (0:00:00.681) 0:00:05.031 ********
2026-01-02 00:38:07.755758 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:38:07.755766 | orchestrator |
2026-01-02 00:38:07.755774 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-02 00:38:07.755782 | orchestrator |
2026-01-02 00:38:07.755790 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-02 00:38:07.755798 | orchestrator | Friday 02 January 2026 00:38:06 +0000 (0:00:00.112) 0:00:05.143 ********
2026-01-02 00:38:07.755806 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:38:07.755814 | orchestrator |
2026-01-02 00:38:07.755822 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-02 00:38:07.755830 | orchestrator | Friday 02 January 2026 00:38:06 +0000 (0:00:00.098) 0:00:05.242 ********
2026-01-02 00:38:07.755838 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:38:07.755846 | orchestrator |
2026-01-02 00:38:07.755854 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-02 00:38:07.755863 | orchestrator | Friday 02 January 2026 00:38:07 +0000 (0:00:00.661) 0:00:05.903 ********
2026-01-02 00:38:07.755887 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:38:07.755896 | orchestrator |
2026-01-02 00:38:07.755904 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:38:07.755914 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:38:07.755923 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:38:07.755931 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:38:07.755939 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:38:07.755947 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:38:07.755955 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:38:07.755963 | orchestrator | 2026-01-02 00:38:07.755971 | orchestrator | 2026-01-02 00:38:07.755980 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:38:07.755988 | orchestrator | Friday 02 January 2026 00:38:07 +0000 (0:00:00.039) 0:00:05.943 ******** 2026-01-02 00:38:07.755996 | orchestrator | =============================================================================== 2026-01-02 00:38:07.756004 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.38s 2026-01-02 00:38:07.756012 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s 2026-01-02 00:38:07.756020 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2026-01-02 00:38:08.046970 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-02 00:38:20.135981 | orchestrator | 2026-01-02 00:38:20 | INFO  | Task de1c2b6d-06e8-4803-b2aa-2fd8c0245b3e (wait-for-connection) was prepared for execution. 2026-01-02 00:38:20.136124 | orchestrator | 2026-01-02 00:38:20 | INFO  | It takes a moment until task de1c2b6d-06e8-4803-b2aa-2fd8c0245b3e (wait-for-connection) has been started and output is visible here. 
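The plays above deliberately trigger each reboot without blocking (the "wait for the reboot to complete" task is skipped), and a separate `wait-for-connection` run then polls until the nodes answer again. A minimal shell sketch of that second step, with the reachability probe factored out as a parameter so it can be stubbed; the `wait_for_connection` helper, its retry count, and its defaults are illustrative assumptions, not code taken from the job:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the wait-for-connection step: retry a probe
# command against a host until it answers, giving up after `retries`
# attempts.  In practice the probe would be ssh or an Ansible module.
wait_for_connection() {
    local host="$1"
    local retries="${2:-60}"     # assumed default, not from the job
    local probe="${3:-ssh}"      # e.g. `ssh <host> true`
    local i
    for (( i = 0; i < retries; i++ )); do
        if "$probe" "$host" true 2>/dev/null; then
            return 0             # host is reachable again
        fi
        sleep 5
    done
    return 1                     # gave up after $retries probes
}
```

Separating "trigger the reboot" from "wait for the host to return" keeps the reboot task itself from failing when the SSH connection drops mid-task, which is why the play skips its own wait step and defers to the dedicated play.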
2026-01-02 00:38:35.806231 | orchestrator |
2026-01-02 00:38:35.806414 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-02 00:38:35.806432 | orchestrator |
2026-01-02 00:38:35.806443 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-02 00:38:35.806454 | orchestrator | Friday 02 January 2026 00:38:23 +0000 (0:00:00.169) 0:00:00.169 ********
2026-01-02 00:38:35.806464 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:38:35.806477 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:38:35.806487 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:38:35.806497 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:38:35.806506 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:38:35.806516 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:38:35.806526 | orchestrator |
2026-01-02 00:38:35.806536 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:38:35.806546 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:38:35.806558 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:38:35.806568 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:38:35.806578 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:38:35.806588 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:38:35.806597 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:38:35.806607 | orchestrator |
2026-01-02 00:38:35.806617 | orchestrator |
2026-01-02 00:38:35.806626 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:38:35.806636 | orchestrator | Friday 02 January 2026 00:38:35 +0000 (0:00:11.489) 0:00:11.659 ********
2026-01-02 00:38:35.806646 | orchestrator | ===============================================================================
2026-01-02 00:38:35.806656 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.49s
2026-01-02 00:38:36.090861 | orchestrator | + osism apply hddtemp
2026-01-02 00:38:48.183048 | orchestrator | 2026-01-02 00:38:48 | INFO  | Task 5dfc73b2-fdda-4fcc-bcb6-6c49ea5fc1d6 (hddtemp) was prepared for execution.
2026-01-02 00:38:48.183186 | orchestrator | 2026-01-02 00:38:48 | INFO  | It takes a moment until task 5dfc73b2-fdda-4fcc-bcb6-6c49ea5fc1d6 (hddtemp) has been started and output is visible here.
2026-01-02 00:39:16.684734 | orchestrator |
2026-01-02 00:39:16.684854 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-02 00:39:16.684871 | orchestrator |
2026-01-02 00:39:16.684884 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-02 00:39:16.684895 | orchestrator | Friday 02 January 2026 00:38:52 +0000 (0:00:00.242) 0:00:00.242 ********
2026-01-02 00:39:16.684907 | orchestrator | ok: [testbed-manager]
2026-01-02 00:39:16.684921 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:39:16.684933 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:39:16.684944 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:39:16.684956 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:39:16.684968 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:39:16.684980 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:39:16.684991 | orchestrator |
2026-01-02 00:39:16.685003 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-02 00:39:16.685014 | orchestrator | Friday 02 January 2026 00:38:53 +0000 (0:00:00.621) 0:00:00.863 ********
2026-01-02 00:39:16.685054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:39:16.685068 | orchestrator |
2026-01-02 00:39:16.685079 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-02 00:39:16.685090 | orchestrator | Friday 02 January 2026 00:38:54 +0000 (0:00:01.021) 0:00:01.884 ********
2026-01-02 00:39:16.685101 | orchestrator | ok: [testbed-manager]
2026-01-02 00:39:16.685112 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:39:16.685123 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:39:16.685134 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:39:16.685145 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:39:16.685156 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:39:16.685167 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:39:16.685178 | orchestrator |
2026-01-02 00:39:16.685189 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-02 00:39:16.685200 | orchestrator | Friday 02 January 2026 00:38:56 +0000 (0:00:02.203) 0:00:04.088 ********
2026-01-02 00:39:16.685211 | orchestrator | changed: [testbed-manager]
2026-01-02 00:39:16.685223 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:39:16.685234 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:39:16.685305 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:39:16.685321 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:39:16.685334 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:39:16.685347 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:39:16.685360 | orchestrator |
2026-01-02 00:39:16.685373 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-02 00:39:16.685386 | orchestrator | Friday 02 January 2026 00:38:57 +0000 (0:00:01.141) 0:00:05.230 ********
2026-01-02 00:39:16.685400 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:39:16.685413 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:39:16.685425 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:39:16.685439 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:39:16.685451 | orchestrator | ok: [testbed-manager]
2026-01-02 00:39:16.685465 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:39:16.685477 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:39:16.685490 | orchestrator |
2026-01-02 00:39:16.685519 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-02 00:39:16.685532 | orchestrator | Friday 02 January 2026 00:38:58 +0000 (0:00:01.146) 0:00:06.376 ********
2026-01-02 00:39:16.685545 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:39:16.685558 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:39:16.685571 | orchestrator | changed: [testbed-manager]
2026-01-02 00:39:16.685584 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:39:16.685598 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:39:16.685610 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:39:16.685624 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:39:16.685637 | orchestrator |
2026-01-02 00:39:16.685648 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-02 00:39:16.685659 | orchestrator | Friday 02 January 2026 00:38:59 +0000 (0:00:00.837) 0:00:07.214 ********
2026-01-02 00:39:16.685670 | orchestrator | changed: [testbed-manager]
2026-01-02 00:39:16.685681 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:39:16.685693 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:39:16.685703 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:39:16.685714 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:39:16.685725 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:39:16.685736 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:39:16.685747 | orchestrator |
2026-01-02 00:39:16.685758 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-02 00:39:16.685769 | orchestrator | Friday 02 January 2026 00:39:13 +0000 (0:00:13.849) 0:00:21.063 ********
2026-01-02 00:39:16.685789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:39:16.685801 | orchestrator |
2026-01-02 00:39:16.685812 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-02 00:39:16.685823 | orchestrator | Friday 02 January 2026 00:39:14 +0000 (0:00:01.155) 0:00:22.219 ********
2026-01-02 00:39:16.685833 | orchestrator | changed: [testbed-manager]
2026-01-02 00:39:16.685844 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:39:16.685855 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:39:16.685866 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:39:16.685877 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:39:16.685888 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:39:16.685899 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:39:16.685910 | orchestrator |
2026-01-02 00:39:16.685921 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:39:16.685932 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 00:39:16.685965 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:39:16.685977 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:39:16.685989 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:39:16.686000 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:39:16.686011 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:39:16.686092 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:39:16.686104 | orchestrator |
2026-01-02 00:39:16.686115 | orchestrator |
2026-01-02 00:39:16.686126 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:39:16.686138 | orchestrator | Friday 02 January 2026 00:39:16 +0000 (0:00:01.840) 0:00:24.059 ********
2026-01-02 00:39:16.686151 | orchestrator | ===============================================================================
2026-01-02 00:39:16.686170 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.85s
2026-01-02 00:39:16.686188 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.20s
2026-01-02 00:39:16.686214 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s
2026-01-02 00:39:16.686236 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s
2026-01-02 00:39:16.686295 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.15s
2026-01-02 00:39:16.686314 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s
2026-01-02 00:39:16.686333 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s
2026-01-02 00:39:16.686350 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s
2026-01-02 00:39:16.686368 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.62s
2026-01-02 00:39:16.971960 | orchestrator | ++ semver latest 7.1.1
2026-01-02 00:39:17.027231 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-02 00:39:17.027369 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-02 00:39:17.027384 | orchestrator | + sudo systemctl restart manager.service
2026-01-02 00:39:31.529575 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-02 00:39:31.529728 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-02 00:39:31.529752 | orchestrator | + local max_attempts=60
2026-01-02 00:39:31.529769 | orchestrator | + local name=ceph-ansible
2026-01-02 00:39:31.529783 | orchestrator | + local attempt_num=1
2026-01-02 00:39:31.529814 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:39:31.565660 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:39:31.565756 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:39:31.565768 | orchestrator | + sleep 5
2026-01-02 00:39:36.568106 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:39:36.603166 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:39:36.603290 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:39:36.603306 | orchestrator | + sleep 5
2026-01-02 00:39:41.604949 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:39:41.641851 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:39:41.641935 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:39:41.641952 | orchestrator | + sleep 5
2026-01-02 00:39:46.646083 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:39:46.688430 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:39:46.688516 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:39:46.688529 | orchestrator | + sleep 5
2026-01-02 00:39:51.693174 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:39:51.727728 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:39:51.727835 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:39:51.727853 | orchestrator | + sleep 5
2026-01-02 00:39:56.732584 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:39:56.761410 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:39:56.761492 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:39:56.761502 | orchestrator | + sleep 5
2026-01-02 00:40:01.765170 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:01.793864 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:01.793975 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:01.793993 | orchestrator | + sleep 5
2026-01-02 00:40:06.797666 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:06.841785 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:06.841874 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:06.841888 | orchestrator | + sleep 5
2026-01-02 00:40:11.843607 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:11.864977 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:11.865064 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:11.865081 | orchestrator | + sleep 5
2026-01-02 00:40:16.868121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:16.902096 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:16.902183 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:16.902288 | orchestrator | + sleep 5
2026-01-02 00:40:21.906516 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:21.940140 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:21.940267 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:21.940286 | orchestrator | + sleep 5
2026-01-02 00:40:26.944180 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:26.980670 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:26.980784 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:26.980811 | orchestrator | + sleep 5
2026-01-02 00:40:31.984922 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:32.024258 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:32.024354 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-02 00:40:32.024370 | orchestrator | + sleep 5
2026-01-02 00:40:37.030364 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-02 00:40:37.061421 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:37.061514 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-02 00:40:37.061529 | orchestrator | + local max_attempts=60
2026-01-02 00:40:37.061573 | orchestrator | + local name=kolla-ansible
2026-01-02 00:40:37.061586 | orchestrator | + local attempt_num=1
2026-01-02 00:40:37.061860 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-02 00:40:37.084030 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:37.084086 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-02 00:40:37.084099 | orchestrator | + local max_attempts=60
2026-01-02 00:40:37.084111 | orchestrator | + local name=osism-ansible
2026-01-02 00:40:37.084122 | orchestrator | + local attempt_num=1
2026-01-02 00:40:37.084667 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-02 00:40:37.117331 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-02 00:40:37.117424 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-02 00:40:37.117438 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-02 00:40:37.256520 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-02 00:40:37.417316 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-02 00:40:37.536244 | orchestrator | ARA in osism-ansible already disabled.
2026-01-02 00:40:37.660644 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-02 00:40:37.660925 | orchestrator | + osism apply gather-facts
2026-01-02 00:40:49.594359 | orchestrator | 2026-01-02 00:40:49 | INFO  | Task 21083c7b-44a2-445c-8fa3-41f66493195a (gather-facts) was prepared for execution.
2026-01-02 00:40:49.594476 | orchestrator | 2026-01-02 00:40:49 | INFO  | It takes a moment until task 21083c7b-44a2-445c-8fa3-41f66493195a (gather-facts) has been started and output is visible here.
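The xtrace above comes from a `wait_for_container_healthy` helper. From the traced statements (`local max_attempts`/`name`/`attempt_num`, the `docker inspect` health probe, the `(( attempt_num++ == max_attempts ))` guard, and the 5-second sleep), a plausible reconstruction looks like the following; the exact script body and the error message are assumptions of this sketch, and plain `docker` stands in for the traced `/usr/bin/docker`:

```shell
#!/usr/bin/env bash
# Poll a container's Docker health status until it reports "healthy",
# giving up after max_attempts probes spaced 5 seconds apart.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5                  # matches the 5-second cadence in the trace
    done
}
```

In the run above, ceph-ansible moved from `unhealthy` through `starting` to `healthy` in roughly a minute after the manager.service restart, while kolla-ansible and osism-ansible were already healthy on their first probe.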
2026-01-02 00:41:03.148430 | orchestrator | 2026-01-02 00:41:03.148544 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:41:03.148561 | orchestrator | 2026-01-02 00:41:03.148574 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-02 00:41:03.148586 | orchestrator | Friday 02 January 2026 00:40:53 +0000 (0:00:00.159) 0:00:00.159 ******** 2026-01-02 00:41:03.148598 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:41:03.148612 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:41:03.148623 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:41:03.148634 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:41:03.148646 | orchestrator | ok: [testbed-manager] 2026-01-02 00:41:03.148657 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:41:03.148668 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:41:03.148679 | orchestrator | 2026-01-02 00:41:03.148691 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-02 00:41:03.148702 | orchestrator | 2026-01-02 00:41:03.148713 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-02 00:41:03.148724 | orchestrator | Friday 02 January 2026 00:41:02 +0000 (0:00:09.113) 0:00:09.273 ******** 2026-01-02 00:41:03.148735 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:41:03.148747 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:41:03.148758 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:41:03.148770 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:41:03.148781 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:41:03.148792 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:41:03.148803 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:41:03.148814 | orchestrator | 2026-01-02 00:41:03.148825 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-02 00:41:03.148837 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148849 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148860 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148871 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148882 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148922 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148933 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:41:03.148944 | orchestrator | 2026-01-02 00:41:03.148956 | orchestrator | 2026-01-02 00:41:03.148967 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:41:03.148979 | orchestrator | Friday 02 January 2026 00:41:02 +0000 (0:00:00.488) 0:00:09.762 ******** 2026-01-02 00:41:03.148993 | orchestrator | =============================================================================== 2026-01-02 00:41:03.149006 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.11s 2026-01-02 00:41:03.149021 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-01-02 00:41:03.432896 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-02 00:41:03.444503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-02 00:41:03.456053 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-02 00:41:03.472478 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-02 00:41:03.486307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-02 00:41:03.510708 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-02 00:41:03.526343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-02 00:41:03.554371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-02 00:41:03.554455 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-02 00:41:03.563551 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-02 00:41:03.576010 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-02 00:41:03.586423 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-02 00:41:03.598308 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-02 00:41:03.606925 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-02 00:41:03.615487 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-02 00:41:03.624063 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-02 00:41:03.632768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-02 00:41:03.641768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-02 00:41:03.650738 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-02 00:41:03.660485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-02 00:41:03.670939 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-02 00:41:03.832494 | orchestrator | ok: Runtime: 0:23:45.244505
2026-01-02 00:41:03.956582 |
2026-01-02 00:41:03.956738 | TASK [Deploy services]
2026-01-02 00:41:04.490971 | orchestrator | skipping: Conditional result was False
2026-01-02 00:41:04.506952 |
2026-01-02 00:41:04.507117 | TASK [Deploy in a nutshell]
2026-01-02 00:41:05.206557 | orchestrator | + set -e
2026-01-02 00:41:05.206740 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-02 00:41:05.206762 | orchestrator | ++ export INTERACTIVE=false
2026-01-02 00:41:05.206784 | orchestrator | ++ INTERACTIVE=false
2026-01-02 00:41:05.206797 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-02 00:41:05.206811 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-02 00:41:05.206825 | orchestrator | + source /opt/manager-vars.sh
2026-01-02 00:41:05.206872 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-02 00:41:05.206901 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-02 00:41:05.206916 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-02 00:41:05.206931 | orchestrator | ++ CEPH_VERSION=reef
2026-01-02 00:41:05.206943 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-02 00:41:05.206962 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-02 00:41:05.206973 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-02 00:41:05.206994 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-02 00:41:05.207005 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-02 00:41:05.207021 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-02 00:41:05.207032 | orchestrator | ++ export ARA=false
2026-01-02 00:41:05.207044 | orchestrator | ++ ARA=false
2026-01-02 00:41:05.207055 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-02 00:41:05.207080 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-02 00:41:05.207091 | orchestrator | ++ export TEMPEST=true
2026-01-02 00:41:05.207102 | orchestrator | ++ TEMPEST=true
2026-01-02 00:41:05.207113 | orchestrator | ++ export IS_ZUUL=true
2026-01-02 00:41:05.207124 | orchestrator | ++ IS_ZUUL=true
2026-01-02 00:41:05.207135 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159
2026-01-02 00:41:05.207147 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.159
2026-01-02 00:41:05.207157 | orchestrator | ++ export EXTERNAL_API=false
2026-01-02 00:41:05.207206 | orchestrator | ++ EXTERNAL_API=false
2026-01-02 00:41:05.207219 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-02 00:41:05.207230 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-02 00:41:05.207241 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-02 00:41:05.207252 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-02 00:41:05.207264 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-02 00:41:05.207275 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-02 00:41:05.207286 | orchestrator | + echo
2026-01-02 00:41:05.207297 | orchestrator |
2026-01-02 00:41:05.207309 | orchestrator | # PULL IMAGES
2026-01-02 00:41:05.207320 | orchestrator |
2026-01-02 00:41:05.207331 | orchestrator | + echo '# PULL IMAGES'
2026-01-02 00:41:05.207342 | orchestrator | + echo
2026-01-02 00:41:05.207370 | orchestrator | ++ semver latest 7.0.0
2026-01-02 00:41:05.243882 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-02 00:41:05.243957 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-02 00:41:05.243978 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-02 00:41:07.088132 | orchestrator | 2026-01-02 00:41:07 | INFO  | Trying to run play pull-images in environment custom
2026-01-02 00:41:17.225643 | orchestrator | 2026-01-02 00:41:17 | INFO  | Task ad6f49d3-242c-43d5-b91f-8cfcf11bf692 (pull-images) was prepared for execution.
2026-01-02 00:41:17.225766 | orchestrator | 2026-01-02 00:41:17 | INFO  | Task ad6f49d3-242c-43d5-b91f-8cfcf11bf692 is running in background. No more output. Check ARA for logs.
2026-01-02 00:41:19.256238 | orchestrator | 2026-01-02 00:41:19 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-02 00:41:29.411008 | orchestrator | 2026-01-02 00:41:29 | INFO  | Task 6ed3ae63-04ce-484d-b99a-57d6373867c9 (wipe-partitions) was prepared for execution.
2026-01-02 00:41:29.411127 | orchestrator | 2026-01-02 00:41:29 | INFO  | It takes a moment until task 6ed3ae63-04ce-484d-b99a-57d6373867c9 (wipe-partitions) has been started and output is visible here.
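The `+`/`++` xtrace lines above show how the deploy script gates the pull-images run on the manager version: `semver latest 7.0.0` prints `-1`, so it is the `latest` special case that lets the `osism apply` proceed. A minimal sketch of that gate, assuming a `sort -V`-based stand-in (`semver_cmp`) for the real `semver` helper:

```shell
#!/bin/sh
# Sketch of the version gate from the xtrace above. semver_cmp stands in
# for the real `semver` helper and prints -1/0/1 (a<b / a=b / a>b);
# non-numeric tags such as "latest" compare as -1, matching the trace.
semver_cmp() {
    case "$1" in *[!0-9.]*) printf '%s\n' -1; return;; esac
    [ "$1" = "$2" ] && { printf '0\n'; return; }
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$lower" = "$1" ]; then printf '%s\n' -1; else printf '1\n'; fi
}

MANAGER_VERSION=latest   # from /opt/manager-vars.sh in the log
if [ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ] \
   || [ "$MANAGER_VERSION" = latest ]; then
    # mirrors: + osism apply --no-wait -r 2 -e custom pull-images
    echo "osism apply --no-wait -r 2 -e custom pull-images"
fi
```

`sort -V` (GNU coreutils version sort) does the numeric comparison; only the `latest`/non-numeric handling is special-cased, as the trace's `[[ latest == \l\a\t\e\s\t ]]` test suggests.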
2026-01-02 00:41:41.304027 | orchestrator |
2026-01-02 00:41:41.304200 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-02 00:41:41.304229 | orchestrator |
2026-01-02 00:41:41.304251 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-02 00:41:41.304281 | orchestrator | Friday 02 January 2026 00:41:33 +0000 (0:00:00.122) 0:00:00.122 ********
2026-01-02 00:41:41.304304 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:41:41.304326 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:41:41.304346 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:41:41.304365 | orchestrator |
2026-01-02 00:41:41.304384 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-02 00:41:41.304438 | orchestrator | Friday 02 January 2026 00:41:33 +0000 (0:00:00.566) 0:00:00.689 ********
2026-01-02 00:41:41.304459 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:41:41.304480 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:41:41.304505 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:41:41.304525 | orchestrator |
2026-01-02 00:41:41.304547 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-02 00:41:41.304568 | orchestrator | Friday 02 January 2026 00:41:34 +0000 (0:00:00.345) 0:00:01.035 ********
2026-01-02 00:41:41.304589 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:41:41.304612 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:41:41.304632 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:41:41.304653 | orchestrator |
2026-01-02 00:41:41.304674 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-02 00:41:41.304695 | orchestrator | Friday 02 January 2026 00:41:34 +0000 (0:00:00.588) 0:00:01.624 ********
2026-01-02 00:41:41.304717 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:41:41.304738 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:41:41.304759 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:41:41.304780 | orchestrator |
2026-01-02 00:41:41.304801 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-02 00:41:41.304821 | orchestrator | Friday 02 January 2026 00:41:34 +0000 (0:00:00.227) 0:00:01.851 ********
2026-01-02 00:41:41.304843 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-02 00:41:41.304869 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-02 00:41:41.304889 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-02 00:41:41.304910 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-02 00:41:41.304929 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-02 00:41:41.304948 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-02 00:41:41.304968 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-02 00:41:41.304987 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-02 00:41:41.305006 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-02 00:41:41.305025 | orchestrator |
2026-01-02 00:41:41.305047 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-02 00:41:41.305067 | orchestrator | Friday 02 January 2026 00:41:36 +0000 (0:00:01.186) 0:00:03.038 ********
2026-01-02 00:41:41.305087 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-02 00:41:41.305106 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-02 00:41:41.305126 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-02 00:41:41.305174 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-02 00:41:41.305193 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-02 00:41:41.305212 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-02 00:41:41.305230 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-02 00:41:41.305247 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-02 00:41:41.305265 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-02 00:41:41.305283 | orchestrator |
2026-01-02 00:41:41.305301 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-02 00:41:41.305320 | orchestrator | Friday 02 January 2026 00:41:37 +0000 (0:00:01.462) 0:00:04.500 ********
2026-01-02 00:41:41.305338 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-02 00:41:41.305356 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-02 00:41:41.305375 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-02 00:41:41.305393 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-02 00:41:41.305411 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-02 00:41:41.305439 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-02 00:41:41.305458 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-02 00:41:41.305489 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-02 00:41:41.305507 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-02 00:41:41.305525 | orchestrator |
2026-01-02 00:41:41.305544 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-02 00:41:41.305562 | orchestrator | Friday 02 January 2026 00:41:39 +0000 (0:00:02.124) 0:00:06.625 ********
2026-01-02 00:41:41.305580 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:41:41.305598 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:41:41.305616 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:41:41.305634 | orchestrator |
2026-01-02 00:41:41.305652 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-02 00:41:41.305669 | orchestrator | Friday 02 January 2026 00:41:40 +0000 (0:00:00.623) 0:00:07.248 ********
2026-01-02 00:41:41.305687 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:41:41.305705 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:41:41.305723 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:41:41.305740 | orchestrator |
2026-01-02 00:41:41.305757 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:41:41.305779 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:41:41.305799 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:41:41.305842 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:41:41.305862 | orchestrator |
2026-01-02 00:41:41.305880 | orchestrator |
2026-01-02 00:41:41.305898 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:41:41.305916 | orchestrator | Friday 02 January 2026 00:41:41 +0000 (0:00:00.614) 0:00:07.863 ********
2026-01-02 00:41:41.305934 | orchestrator | ===============================================================================
2026-01-02 00:41:41.305951 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.12s
2026-01-02 00:41:41.305969 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.46s
2026-01-02 00:41:41.305986 | orchestrator | Check device availability ----------------------------------------------- 1.19s
2026-01-02 00:41:41.306005 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2026-01-02 00:41:41.306104 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2026-01-02 00:41:41.306127 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s
2026-01-02 00:41:41.306183 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2026-01-02 00:41:41.306204 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s
2026-01-02 00:41:41.306221 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2026-01-02 00:41:53.507524 | orchestrator | 2026-01-02 00:41:53 | INFO  | Task b15b949e-fd33-431d-962a-493da37c7f9f (facts) was prepared for execution.
2026-01-02 00:41:53.507635 | orchestrator | 2026-01-02 00:41:53 | INFO  | It takes a moment until task b15b949e-fd33-431d-962a-493da37c7f9f (facts) has been started and output is visible here.
2026-01-02 00:42:04.946819 | orchestrator |
2026-01-02 00:42:04.946941 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-02 00:42:04.946963 | orchestrator |
2026-01-02 00:42:04.946976 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-02 00:42:04.946988 | orchestrator | Friday 02 January 2026 00:41:57 +0000 (0:00:00.194) 0:00:00.194 ********
2026-01-02 00:42:04.947000 | orchestrator | ok: [testbed-manager]
2026-01-02 00:42:04.947013 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:42:04.947024 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:42:04.947062 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:42:04.947073 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:42:04.947084 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:42:04.947095 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:42:04.947106 | orchestrator |
2026-01-02 00:42:04.947120 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-02 00:42:04.947166 | orchestrator | Friday 02 January 2026 00:41:58 +0000 (0:00:00.894) 0:00:01.088 ********
2026-01-02 00:42:04.947178 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:42:04.947190 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:42:04.947201 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:42:04.947212 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:42:04.947223 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:04.947242 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:42:04.947254 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:04.947265 | orchestrator |
2026-01-02 00:42:04.947276 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-02 00:42:04.947291 | orchestrator |
2026-01-02 00:42:04.947307 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-02 00:42:04.947319 | orchestrator | Friday 02 January 2026 00:41:59 +0000 (0:00:01.033) 0:00:02.122 ********
2026-01-02 00:42:04.947330 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:42:04.947341 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:42:04.947352 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:42:04.947364 | orchestrator | ok: [testbed-manager]
2026-01-02 00:42:04.947382 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:42:04.947400 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:42:04.947412 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:42:04.947423 | orchestrator |
2026-01-02 00:42:04.947434 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-02 00:42:04.947445 | orchestrator |
2026-01-02 00:42:04.947456 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-02 00:42:04.947489 | orchestrator | Friday 02 January 2026 00:42:04 +0000 (0:00:04.769) 0:00:06.891 ********
2026-01-02 00:42:04.947504 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:42:04.947515 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:42:04.947526 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:42:04.947537 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:42:04.947548 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:04.947559 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:42:04.947572 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:04.947590 | orchestrator |
2026-01-02 00:42:04.947601 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:42:04.947613 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947625 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947636 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947647 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947658 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947669 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947680 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 00:42:04.947691 | orchestrator |
2026-01-02 00:42:04.947711 | orchestrator |
2026-01-02 00:42:04.947723 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:42:04.947734 | orchestrator | Friday 02 January 2026 00:42:04 +0000 (0:00:00.489) 0:00:07.381 ********
2026-01-02 00:42:04.947745 | orchestrator | ===============================================================================
2026-01-02 00:42:04.947756 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s
2026-01-02 00:42:04.947767 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.03s
2026-01-02 00:42:04.947778 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.89s
2026-01-02 00:42:04.947789 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2026-01-02 00:42:07.199831 | orchestrator | 2026-01-02 00:42:07 | INFO  | Task f90940d0-885b-4a4f-8454-c33cee454d13 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-02 00:42:07.199938 | orchestrator | 2026-01-02 00:42:07 | INFO  | It takes a moment until task f90940d0-885b-4a4f-8454-c33cee454d13 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-02 00:42:17.166991 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-02 00:42:17.167113 | orchestrator | 2.16.14
2026-01-02 00:42:17.167160 | orchestrator |
2026-01-02 00:42:17.167174 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-02 00:42:17.167187 | orchestrator |
2026-01-02 00:42:17.167201 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-02 00:42:17.167213 | orchestrator | Friday 02 January 2026 00:42:11 +0000 (0:00:00.230) 0:00:00.230 ********
2026-01-02 00:42:17.167225 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-02 00:42:17.167236 | orchestrator |
2026-01-02 00:42:17.167247 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-02 00:42:17.167258 | orchestrator | Friday 02 January 2026 00:42:11 +0000 (0:00:00.209) 0:00:00.439 ********
2026-01-02 00:42:17.167269 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:42:17.167281 | orchestrator |
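The wipe-partitions play earlier in the log runs a fixed per-disk sequence on each OSD device (signature wipe, zero the first 32M, then a udev refresh). A dry-run sketch of that sequence for the testbed's /dev/sdb../dev/sdd layout; the function only prints the commands instead of executing them, since they are destructive:

```shell
#!/bin/sh
# Dry-run sketch of the "Wipe partitions" play: print the per-disk
# commands rather than running them (drop the printf wrappers to
# execute for real -- these commands destroy on-disk data).
wipe_cmds() {
    for dev in "$@"; do
        printf 'wipefs --all %s\n' "$dev"                      # drop FS/RAID/LVM signatures
        printf 'dd if=/dev/zero of=%s bs=1M count=32\n' "$dev" # overwrite first 32M with zeros
    done
    printf 'udevadm control --reload-rules\n'   # "Reload udev rules" task
    printf 'udevadm trigger\n'                  # "Request device events from the kernel" task
}

wipe_cmds /dev/sdb /dev/sdc /dev/sdd
```

The trailing `udevadm` pair matters: after the signatures are gone, udev must re-evaluate the devices so later plays (here, ceph-configure-lvm-volumes) see them as clean, unowned disks.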
2026-01-02 00:42:17.167292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167303 | orchestrator | Friday 02 January 2026 00:42:11 +0000 (0:00:00.180) 0:00:00.620 ********
2026-01-02 00:42:17.167315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-02 00:42:17.167326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-02 00:42:17.167337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-02 00:42:17.167348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-02 00:42:17.167359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-02 00:42:17.167370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-02 00:42:17.167381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-02 00:42:17.167392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-02 00:42:17.167403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-02 00:42:17.167414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-02 00:42:17.167433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-02 00:42:17.167445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-02 00:42:17.167456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-02 00:42:17.167467 | orchestrator |
2026-01-02 00:42:17.167478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167513 | orchestrator | Friday 02 January 2026 00:42:11 +0000 (0:00:00.379) 0:00:01.000 ********
2026-01-02 00:42:17.167527 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167541 | orchestrator |
2026-01-02 00:42:17.167554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167567 | orchestrator | Friday 02 January 2026 00:42:12 +0000 (0:00:00.165) 0:00:01.166 ********
2026-01-02 00:42:17.167579 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167592 | orchestrator |
2026-01-02 00:42:17.167605 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167618 | orchestrator | Friday 02 January 2026 00:42:12 +0000 (0:00:00.167) 0:00:01.333 ********
2026-01-02 00:42:17.167631 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167644 | orchestrator |
2026-01-02 00:42:17.167657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167673 | orchestrator | Friday 02 January 2026 00:42:12 +0000 (0:00:00.167) 0:00:01.501 ********
2026-01-02 00:42:17.167686 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167699 | orchestrator |
2026-01-02 00:42:17.167711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167725 | orchestrator | Friday 02 January 2026 00:42:12 +0000 (0:00:00.170) 0:00:01.672 ********
2026-01-02 00:42:17.167738 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167751 | orchestrator |
2026-01-02 00:42:17.167764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167776 | orchestrator | Friday 02 January 2026 00:42:12 +0000 (0:00:00.166) 0:00:01.839 ********
2026-01-02 00:42:17.167789 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167801 | orchestrator |
2026-01-02 00:42:17.167815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167827 | orchestrator | Friday 02 January 2026 00:42:12 +0000 (0:00:00.176) 0:00:02.015 ********
2026-01-02 00:42:17.167840 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167854 | orchestrator |
2026-01-02 00:42:17.167867 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167880 | orchestrator | Friday 02 January 2026 00:42:13 +0000 (0:00:00.181) 0:00:02.197 ********
2026-01-02 00:42:17.167894 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.167906 | orchestrator |
2026-01-02 00:42:17.167917 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.167928 | orchestrator | Friday 02 January 2026 00:42:13 +0000 (0:00:00.178) 0:00:02.376 ********
2026-01-02 00:42:17.167940 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90)
2026-01-02 00:42:17.167952 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90)
2026-01-02 00:42:17.167963 | orchestrator |
2026-01-02 00:42:17.167974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.168004 | orchestrator | Friday 02 January 2026 00:42:13 +0000 (0:00:00.368) 0:00:02.744 ********
2026-01-02 00:42:17.168016 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4)
2026-01-02 00:42:17.168028 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4)
2026-01-02 00:42:17.168039 | orchestrator |
2026-01-02 00:42:17.168050 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.168061 | orchestrator | Friday 02 January 2026 00:42:14 +0000 (0:00:00.510) 0:00:03.255 ********
2026-01-02 00:42:17.168072 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91)
2026-01-02 00:42:17.168083 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91)
2026-01-02 00:42:17.168094 | orchestrator |
2026-01-02 00:42:17.168105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.168185 | orchestrator | Friday 02 January 2026 00:42:14 +0000 (0:00:00.496) 0:00:03.752 ********
2026-01-02 00:42:17.168199 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507)
2026-01-02 00:42:17.168210 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507)
2026-01-02 00:42:17.168221 | orchestrator |
2026-01-02 00:42:17.168232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:17.168243 | orchestrator | Friday 02 January 2026 00:42:15 +0000 (0:00:00.617) 0:00:04.370 ********
2026-01-02 00:42:17.168254 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-02 00:42:17.168265 | orchestrator |
2026-01-02 00:42:17.168282 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168293 | orchestrator | Friday 02 January 2026 00:42:15 +0000 (0:00:00.290) 0:00:04.660 ********
2026-01-02 00:42:17.168304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-02 00:42:17.168315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-02 00:42:17.168326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-02 00:42:17.168337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-02 00:42:17.168347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-02 00:42:17.168358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-02 00:42:17.168369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-02 00:42:17.168380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-02 00:42:17.168391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-02 00:42:17.168401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-02 00:42:17.168412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-02 00:42:17.168423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-02 00:42:17.168433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-02 00:42:17.168444 | orchestrator |
2026-01-02 00:42:17.168455 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168466 | orchestrator | Friday 02 January 2026 00:42:15 +0000 (0:00:00.335) 0:00:04.996 ********
2026-01-02 00:42:17.168477 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168488 | orchestrator |
2026-01-02 00:42:17.168499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168509 | orchestrator | Friday 02 January 2026 00:42:16 +0000 (0:00:00.191) 0:00:05.188 ********
2026-01-02 00:42:17.168520 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168531 | orchestrator |
2026-01-02 00:42:17.168542 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168553 | orchestrator | Friday 02 January 2026 00:42:16 +0000 (0:00:00.181) 0:00:05.370 ********
2026-01-02 00:42:17.168563 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168574 | orchestrator |
2026-01-02 00:42:17.168585 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168596 | orchestrator | Friday 02 January 2026 00:42:16 +0000 (0:00:00.168) 0:00:05.538 ********
2026-01-02 00:42:17.168607 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168618 | orchestrator |
2026-01-02 00:42:17.168628 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168639 | orchestrator | Friday 02 January 2026 00:42:16 +0000 (0:00:00.173) 0:00:05.712 ********
2026-01-02 00:42:17.168657 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168668 | orchestrator |
2026-01-02 00:42:17.168679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168689 | orchestrator | Friday 02 January 2026 00:42:16 +0000 (0:00:00.176) 0:00:05.889 ********
2026-01-02 00:42:17.168700 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168711 | orchestrator |
2026-01-02 00:42:17.168722 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:17.168733 | orchestrator | Friday 02 January 2026 00:42:16 +0000 (0:00:00.183) 0:00:06.072 ********
2026-01-02 00:42:17.168743 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:17.168754 | orchestrator |
2026-01-02 00:42:17.168772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:24.019939 | orchestrator | Friday 02 January 2026 00:42:17 +0000 (0:00:00.172) 0:00:06.245 ********
2026-01-02 00:42:24.020039 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020054 | orchestrator |
2026-01-02 00:42:24.020064 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:24.020074 | orchestrator | Friday 02 January 2026 00:42:17 +0000 (0:00:00.174) 0:00:06.419 ********
2026-01-02 00:42:24.020084 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-02 00:42:24.020094 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-02 00:42:24.020104 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-02 00:42:24.020113 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-02 00:42:24.020199 | orchestrator |
2026-01-02 00:42:24.020211 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:24.020220 | orchestrator | Friday 02 January 2026 00:42:18 +0000 (0:00:00.771) 0:00:07.190 ********
2026-01-02 00:42:24.020229 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020238 | orchestrator |
2026-01-02 00:42:24.020247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:24.020256 | orchestrator | Friday 02 January 2026 00:42:18 +0000 (0:00:00.196) 0:00:07.387 ********
2026-01-02 00:42:24.020265 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020274 | orchestrator |
2026-01-02 00:42:24.020283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:24.020292 | orchestrator | Friday 02 January 2026 00:42:18 +0000 (0:00:00.188) 0:00:07.576 ********
2026-01-02 00:42:24.020301 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020310 | orchestrator |
2026-01-02 00:42:24.020319 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:24.020328 | orchestrator | Friday 02 January 2026 00:42:18 +0000 (0:00:00.192) 0:00:07.768 ********
2026-01-02 00:42:24.020336 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020345 | orchestrator |
2026-01-02 00:42:24.020354 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-02 00:42:24.020363 | orchestrator | Friday 02 January 2026 00:42:18 +0000 (0:00:00.185) 0:00:07.953 ********
2026-01-02 00:42:24.020372 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-02 00:42:24.020380 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-02 00:42:24.020389 | orchestrator |
2026-01-02 00:42:24.020417 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-02 00:42:24.020426 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.167) 0:00:08.121 ********
2026-01-02 00:42:24.020435 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020444 | orchestrator |
2026-01-02 00:42:24.020453 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-02 00:42:24.020462 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.120) 0:00:08.241 ********
2026-01-02 00:42:24.020470 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020479 | orchestrator |
2026-01-02 00:42:24.020488 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-02 00:42:24.020518 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.131) 0:00:08.373 ********
2026-01-02 00:42:24.020527 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020536 | orchestrator |
2026-01-02 00:42:24.020545 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-02 00:42:24.020553 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.130) 0:00:08.503 ********
2026-01-02 00:42:24.020562 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:42:24.020572 | orchestrator |
2026-01-02 00:42:24.020580 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-02 00:42:24.020589 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.132) 0:00:08.635 ********
2026-01-02 00:42:24.020598 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa5ccc98-5ec0-5843-b525-cc12dffb9804'}})
2026-01-02 00:42:24.020608 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'}})
2026-01-02 00:42:24.020617 | orchestrator |
2026-01-02 00:42:24.020625 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-02 00:42:24.020635 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.154) 0:00:08.790 ********
2026-01-02 00:42:24.020644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa5ccc98-5ec0-5843-b525-cc12dffb9804'}})
2026-01-02 00:42:24.020659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'}})
2026-01-02 00:42:24.020669 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020677 | orchestrator |
2026-01-02 00:42:24.020686 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-02 00:42:24.020695 | orchestrator | Friday 02 January 2026 00:42:19 +0000 (0:00:00.143) 0:00:08.934 ********
2026-01-02 00:42:24.020704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa5ccc98-5ec0-5843-b525-cc12dffb9804'}})
2026-01-02 00:42:24.020713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'}})
2026-01-02 00:42:24.020721 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020730 | orchestrator |
2026-01-02 00:42:24.020739 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-02 00:42:24.020748 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.320) 0:00:09.255 ********
2026-01-02 00:42:24.020756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa5ccc98-5ec0-5843-b525-cc12dffb9804'}})
2026-01-02 00:42:24.020782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'}})
2026-01-02 00:42:24.020792 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020801 | orchestrator |
2026-01-02 00:42:24.020810 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-02 00:42:24.020823 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.161) 0:00:09.416 ********
2026-01-02 00:42:24.020833 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:42:24.020841 | orchestrator |
2026-01-02 00:42:24.020850 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-02 00:42:24.020859 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.124) 0:00:09.540 ********
2026-01-02 00:42:24.020868 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:42:24.020876 | orchestrator |
2026-01-02 00:42:24.020885 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-02 00:42:24.020894 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.126) 0:00:09.667 ********
2026-01-02 00:42:24.020902 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:42:24.020911 | orchestrator |
2026-01-02 00:42:24.020920 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-02 00:42:24.020929 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.128) 0:00:09.796 ******** 2026-01-02 00:42:24.020944 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:24.020953 | orchestrator | 2026-01-02 00:42:24.020961 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-02 00:42:24.020970 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.126) 0:00:09.922 ******** 2026-01-02 00:42:24.020979 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:24.020988 | orchestrator | 2026-01-02 00:42:24.020996 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-02 00:42:24.021005 | orchestrator | Friday 02 January 2026 00:42:20 +0000 (0:00:00.131) 0:00:10.054 ******** 2026-01-02 00:42:24.021014 | orchestrator | ok: [testbed-node-3] => { 2026-01-02 00:42:24.021023 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:24.021032 | orchestrator |  "sdb": { 2026-01-02 00:42:24.021041 | orchestrator |  "osd_lvm_uuid": "fa5ccc98-5ec0-5843-b525-cc12dffb9804" 2026-01-02 00:42:24.021050 | orchestrator |  }, 2026-01-02 00:42:24.021058 | orchestrator |  "sdc": { 2026-01-02 00:42:24.021067 | orchestrator |  "osd_lvm_uuid": "1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce" 2026-01-02 00:42:24.021076 | orchestrator |  } 2026-01-02 00:42:24.021085 | orchestrator |  } 2026-01-02 00:42:24.021094 | orchestrator | } 2026-01-02 00:42:24.021103 | orchestrator | 2026-01-02 00:42:24.021111 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-02 00:42:24.021146 | orchestrator | Friday 02 January 2026 00:42:21 +0000 (0:00:00.133) 0:00:10.188 ******** 2026-01-02 00:42:24.021161 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:24.021177 | orchestrator | 
2026-01-02 00:42:24.021192 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-02 00:42:24.021206 | orchestrator | Friday 02 January 2026 00:42:21 +0000 (0:00:00.134) 0:00:10.323 ******** 2026-01-02 00:42:24.021216 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:24.021225 | orchestrator | 2026-01-02 00:42:24.021233 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-02 00:42:24.021242 | orchestrator | Friday 02 January 2026 00:42:21 +0000 (0:00:00.115) 0:00:10.438 ******** 2026-01-02 00:42:24.021251 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:42:24.021260 | orchestrator | 2026-01-02 00:42:24.021269 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-02 00:42:24.021277 | orchestrator | Friday 02 January 2026 00:42:21 +0000 (0:00:00.125) 0:00:10.564 ******** 2026-01-02 00:42:24.021286 | orchestrator | changed: [testbed-node-3] => { 2026-01-02 00:42:24.021295 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-02 00:42:24.021304 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:24.021313 | orchestrator |  "sdb": { 2026-01-02 00:42:24.021322 | orchestrator |  "osd_lvm_uuid": "fa5ccc98-5ec0-5843-b525-cc12dffb9804" 2026-01-02 00:42:24.021331 | orchestrator |  }, 2026-01-02 00:42:24.021339 | orchestrator |  "sdc": { 2026-01-02 00:42:24.021348 | orchestrator |  "osd_lvm_uuid": "1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce" 2026-01-02 00:42:24.021357 | orchestrator |  } 2026-01-02 00:42:24.021366 | orchestrator |  }, 2026-01-02 00:42:24.021374 | orchestrator |  "lvm_volumes": [ 2026-01-02 00:42:24.021383 | orchestrator |  { 2026-01-02 00:42:24.021392 | orchestrator |  "data": "osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804", 2026-01-02 00:42:24.021401 | orchestrator |  "data_vg": "ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804" 2026-01-02 00:42:24.021410 | orchestrator |  }, 
2026-01-02 00:42:24.021418 | orchestrator |  { 2026-01-02 00:42:24.021427 | orchestrator |  "data": "osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce", 2026-01-02 00:42:24.021436 | orchestrator |  "data_vg": "ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce" 2026-01-02 00:42:24.021450 | orchestrator |  } 2026-01-02 00:42:24.021459 | orchestrator |  ] 2026-01-02 00:42:24.021468 | orchestrator |  } 2026-01-02 00:42:24.021483 | orchestrator | } 2026-01-02 00:42:24.021492 | orchestrator | 2026-01-02 00:42:24.021500 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-02 00:42:24.021509 | orchestrator | Friday 02 January 2026 00:42:21 +0000 (0:00:00.357) 0:00:10.922 ******** 2026-01-02 00:42:24.021552 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-02 00:42:24.021564 | orchestrator | 2026-01-02 00:42:24.021573 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-02 00:42:24.021581 | orchestrator | 2026-01-02 00:42:24.021590 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-02 00:42:24.021599 | orchestrator | Friday 02 January 2026 00:42:23 +0000 (0:00:01.697) 0:00:12.620 ******** 2026-01-02 00:42:24.021608 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-02 00:42:24.021616 | orchestrator | 2026-01-02 00:42:24.021625 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-02 00:42:24.021634 | orchestrator | Friday 02 January 2026 00:42:23 +0000 (0:00:00.240) 0:00:12.860 ******** 2026-01-02 00:42:24.021643 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:24.021651 | orchestrator | 2026-01-02 00:42:24.021667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.391610 | orchestrator | Friday 02 January 2026 00:42:24 +0000 (0:00:00.236) 
0:00:13.096 ******** 2026-01-02 00:42:31.392382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-02 00:42:31.392407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:42:31.392414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:42:31.392422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:42:31.392429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:42:31.392436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:42:31.392443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:42:31.392450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:42:31.392456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-02 00:42:31.392463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:42:31.392469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:42:31.392480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:42:31.392487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:42:31.392494 | orchestrator | 2026-01-02 00:42:31.392502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392509 | orchestrator | Friday 02 January 2026 00:42:24 +0000 (0:00:00.376) 0:00:13.472 ******** 2026-01-02 00:42:31.392516 | orchestrator | skipping: 
[testbed-node-4] 2026-01-02 00:42:31.392524 | orchestrator | 2026-01-02 00:42:31.392531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392537 | orchestrator | Friday 02 January 2026 00:42:24 +0000 (0:00:00.186) 0:00:13.659 ******** 2026-01-02 00:42:31.392544 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392551 | orchestrator | 2026-01-02 00:42:31.392557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392564 | orchestrator | Friday 02 January 2026 00:42:24 +0000 (0:00:00.187) 0:00:13.846 ******** 2026-01-02 00:42:31.392571 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392577 | orchestrator | 2026-01-02 00:42:31.392584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392610 | orchestrator | Friday 02 January 2026 00:42:24 +0000 (0:00:00.177) 0:00:14.024 ******** 2026-01-02 00:42:31.392617 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392624 | orchestrator | 2026-01-02 00:42:31.392630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392637 | orchestrator | Friday 02 January 2026 00:42:25 +0000 (0:00:00.186) 0:00:14.210 ******** 2026-01-02 00:42:31.392643 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392650 | orchestrator | 2026-01-02 00:42:31.392656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392663 | orchestrator | Friday 02 January 2026 00:42:25 +0000 (0:00:00.499) 0:00:14.709 ******** 2026-01-02 00:42:31.392670 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392676 | orchestrator | 2026-01-02 00:42:31.392696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392703 | 
orchestrator | Friday 02 January 2026 00:42:25 +0000 (0:00:00.196) 0:00:14.906 ******** 2026-01-02 00:42:31.392710 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392717 | orchestrator | 2026-01-02 00:42:31.392723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392730 | orchestrator | Friday 02 January 2026 00:42:26 +0000 (0:00:00.183) 0:00:15.090 ******** 2026-01-02 00:42:31.392736 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.392743 | orchestrator | 2026-01-02 00:42:31.392749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392756 | orchestrator | Friday 02 January 2026 00:42:26 +0000 (0:00:00.191) 0:00:15.281 ******** 2026-01-02 00:42:31.392762 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2) 2026-01-02 00:42:31.392771 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2) 2026-01-02 00:42:31.392777 | orchestrator | 2026-01-02 00:42:31.392784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392790 | orchestrator | Friday 02 January 2026 00:42:26 +0000 (0:00:00.403) 0:00:15.685 ******** 2026-01-02 00:42:31.392797 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf) 2026-01-02 00:42:31.392804 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf) 2026-01-02 00:42:31.392810 | orchestrator | 2026-01-02 00:42:31.392817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392823 | orchestrator | Friday 02 January 2026 00:42:26 +0000 (0:00:00.390) 0:00:16.075 ******** 2026-01-02 00:42:31.392830 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005) 2026-01-02 00:42:31.392837 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005) 2026-01-02 00:42:31.392843 | orchestrator | 2026-01-02 00:42:31.392850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392873 | orchestrator | Friday 02 January 2026 00:42:27 +0000 (0:00:00.399) 0:00:16.475 ******** 2026-01-02 00:42:31.392880 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f) 2026-01-02 00:42:31.392887 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f) 2026-01-02 00:42:31.392894 | orchestrator | 2026-01-02 00:42:31.392901 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:42:31.392907 | orchestrator | Friday 02 January 2026 00:42:27 +0000 (0:00:00.398) 0:00:16.873 ******** 2026-01-02 00:42:31.392914 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-02 00:42:31.392921 | orchestrator | 2026-01-02 00:42:31.392927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.392934 | orchestrator | Friday 02 January 2026 00:42:28 +0000 (0:00:00.310) 0:00:17.184 ******** 2026-01-02 00:42:31.392952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-02 00:42:31.392959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:42:31.392965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:42:31.392972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:42:31.392979 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:42:31.392985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:42:31.392992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:42:31.392998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:42:31.393005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-02 00:42:31.393011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:42:31.393018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:42:31.393024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:42:31.393030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:42:31.393037 | orchestrator | 2026-01-02 00:42:31.393044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393050 | orchestrator | Friday 02 January 2026 00:42:28 +0000 (0:00:00.365) 0:00:17.550 ******** 2026-01-02 00:42:31.393057 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393063 | orchestrator | 2026-01-02 00:42:31.393070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393081 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.569) 0:00:18.119 ******** 2026-01-02 00:42:31.393088 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393094 | orchestrator | 2026-01-02 00:42:31.393101 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-02 00:42:31.393108 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.190) 0:00:18.309 ******** 2026-01-02 00:42:31.393144 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393157 | orchestrator | 2026-01-02 00:42:31.393169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393180 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.199) 0:00:18.508 ******** 2026-01-02 00:42:31.393189 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393196 | orchestrator | 2026-01-02 00:42:31.393203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393210 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.182) 0:00:18.691 ******** 2026-01-02 00:42:31.393216 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393223 | orchestrator | 2026-01-02 00:42:31.393229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393236 | orchestrator | Friday 02 January 2026 00:42:29 +0000 (0:00:00.198) 0:00:18.890 ******** 2026-01-02 00:42:31.393243 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393249 | orchestrator | 2026-01-02 00:42:31.393256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393263 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.203) 0:00:19.093 ******** 2026-01-02 00:42:31.393269 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:31.393276 | orchestrator | 2026-01-02 00:42:31.393283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393289 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.190) 0:00:19.284 ******** 2026-01-02 00:42:31.393301 | orchestrator | skipping: [testbed-node-4] 
2026-01-02 00:42:31.393308 | orchestrator | 2026-01-02 00:42:31.393314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393321 | orchestrator | Friday 02 January 2026 00:42:30 +0000 (0:00:00.206) 0:00:19.491 ******** 2026-01-02 00:42:31.393328 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-02 00:42:31.393335 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-02 00:42:31.393342 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-02 00:42:31.393349 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-02 00:42:31.393355 | orchestrator | 2026-01-02 00:42:31.393362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:31.393369 | orchestrator | Friday 02 January 2026 00:42:31 +0000 (0:00:00.776) 0:00:20.267 ******** 2026-01-02 00:42:31.393376 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.545950 | orchestrator | 2026-01-02 00:42:37.546155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:37.546177 | orchestrator | Friday 02 January 2026 00:42:31 +0000 (0:00:00.201) 0:00:20.468 ******** 2026-01-02 00:42:37.546189 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546202 | orchestrator | 2026-01-02 00:42:37.546213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:37.546225 | orchestrator | Friday 02 January 2026 00:42:31 +0000 (0:00:00.193) 0:00:20.662 ******** 2026-01-02 00:42:37.546237 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546248 | orchestrator | 2026-01-02 00:42:37.546259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:42:37.546270 | orchestrator | Friday 02 January 2026 00:42:31 +0000 (0:00:00.220) 0:00:20.883 ******** 2026-01-02 00:42:37.546281 | 
orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546292 | orchestrator | 2026-01-02 00:42:37.546303 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-02 00:42:37.546314 | orchestrator | Friday 02 January 2026 00:42:32 +0000 (0:00:00.642) 0:00:21.525 ******** 2026-01-02 00:42:37.546325 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-02 00:42:37.546336 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-02 00:42:37.546347 | orchestrator | 2026-01-02 00:42:37.546358 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-02 00:42:37.546369 | orchestrator | Friday 02 January 2026 00:42:32 +0000 (0:00:00.177) 0:00:21.702 ******** 2026-01-02 00:42:37.546380 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546391 | orchestrator | 2026-01-02 00:42:37.546402 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-02 00:42:37.546413 | orchestrator | Friday 02 January 2026 00:42:32 +0000 (0:00:00.139) 0:00:21.842 ******** 2026-01-02 00:42:37.546424 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546435 | orchestrator | 2026-01-02 00:42:37.546446 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-02 00:42:37.546459 | orchestrator | Friday 02 January 2026 00:42:32 +0000 (0:00:00.151) 0:00:21.994 ******** 2026-01-02 00:42:37.546472 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546486 | orchestrator | 2026-01-02 00:42:37.546498 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-02 00:42:37.546514 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:00.149) 0:00:22.144 ******** 2026-01-02 00:42:37.546533 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:37.546552 | 
orchestrator | 2026-01-02 00:42:37.546571 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-02 00:42:37.546590 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:00.185) 0:00:22.330 ******** 2026-01-02 00:42:37.546604 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '319da19b-b53c-570d-92cc-c377bf830026'}}) 2026-01-02 00:42:37.546619 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}}) 2026-01-02 00:42:37.546661 | orchestrator | 2026-01-02 00:42:37.546674 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-02 00:42:37.546687 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:00.148) 0:00:22.479 ******** 2026-01-02 00:42:37.546701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '319da19b-b53c-570d-92cc-c377bf830026'}})  2026-01-02 00:42:37.546732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}})  2026-01-02 00:42:37.546746 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546759 | orchestrator | 2026-01-02 00:42:37.546772 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-02 00:42:37.546785 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:00.145) 0:00:22.624 ******** 2026-01-02 00:42:37.546798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '319da19b-b53c-570d-92cc-c377bf830026'}})  2026-01-02 00:42:37.546811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}})  2026-01-02 00:42:37.546824 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546835 | orchestrator | 2026-01-02 
00:42:37.546846 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-02 00:42:37.546857 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:00.140) 0:00:22.765 ******** 2026-01-02 00:42:37.546868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '319da19b-b53c-570d-92cc-c377bf830026'}})  2026-01-02 00:42:37.546879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}})  2026-01-02 00:42:37.546890 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.546901 | orchestrator | 2026-01-02 00:42:37.546912 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-02 00:42:37.546923 | orchestrator | Friday 02 January 2026 00:42:33 +0000 (0:00:00.203) 0:00:22.968 ******** 2026-01-02 00:42:37.546934 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:37.546945 | orchestrator | 2026-01-02 00:42:37.546955 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-02 00:42:37.546966 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.137) 0:00:23.106 ******** 2026-01-02 00:42:37.546977 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:42:37.546988 | orchestrator | 2026-01-02 00:42:37.546999 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-02 00:42:37.547009 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.148) 0:00:23.255 ******** 2026-01-02 00:42:37.547041 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.547052 | orchestrator | 2026-01-02 00:42:37.547063 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-02 00:42:37.547074 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.349) 0:00:23.605 ******** 2026-01-02 
00:42:37.547085 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.547096 | orchestrator | 2026-01-02 00:42:37.547132 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-02 00:42:37.547145 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.137) 0:00:23.742 ******** 2026-01-02 00:42:37.547155 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.547166 | orchestrator | 2026-01-02 00:42:37.547177 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-02 00:42:37.547188 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.134) 0:00:23.877 ******** 2026-01-02 00:42:37.547199 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:42:37.547210 | orchestrator |  "ceph_osd_devices": { 2026-01-02 00:42:37.547221 | orchestrator |  "sdb": { 2026-01-02 00:42:37.547232 | orchestrator |  "osd_lvm_uuid": "319da19b-b53c-570d-92cc-c377bf830026" 2026-01-02 00:42:37.547252 | orchestrator |  }, 2026-01-02 00:42:37.547263 | orchestrator |  "sdc": { 2026-01-02 00:42:37.547274 | orchestrator |  "osd_lvm_uuid": "aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0" 2026-01-02 00:42:37.547285 | orchestrator |  } 2026-01-02 00:42:37.547296 | orchestrator |  } 2026-01-02 00:42:37.547307 | orchestrator | } 2026-01-02 00:42:37.547318 | orchestrator | 2026-01-02 00:42:37.547329 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-02 00:42:37.547340 | orchestrator | Friday 02 January 2026 00:42:34 +0000 (0:00:00.133) 0:00:24.011 ******** 2026-01-02 00:42:37.547350 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:42:37.547361 | orchestrator | 2026-01-02 00:42:37.547372 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-02 00:42:37.547383 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.129) 0:00:24.140 ******** 2026-01-02 
00:42:37.547394 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:42:37.547405 | orchestrator |
2026-01-02 00:42:37.547415 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-02 00:42:37.547426 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.129) 0:00:24.270 ********
2026-01-02 00:42:37.547437 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:42:37.547448 | orchestrator |
2026-01-02 00:42:37.547459 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-02 00:42:37.547470 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.131) 0:00:24.401 ********
2026-01-02 00:42:37.547481 | orchestrator | changed: [testbed-node-4] => {
2026-01-02 00:42:37.547491 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-02 00:42:37.547503 | orchestrator |         "ceph_osd_devices": {
2026-01-02 00:42:37.547565 | orchestrator |             "sdb": {
2026-01-02 00:42:37.547578 | orchestrator |                 "osd_lvm_uuid": "319da19b-b53c-570d-92cc-c377bf830026"
2026-01-02 00:42:37.547589 | orchestrator |             },
2026-01-02 00:42:37.547600 | orchestrator |             "sdc": {
2026-01-02 00:42:37.547611 | orchestrator |                 "osd_lvm_uuid": "aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0"
2026-01-02 00:42:37.547622 | orchestrator |             }
2026-01-02 00:42:37.547633 | orchestrator |         },
2026-01-02 00:42:37.547644 | orchestrator |         "lvm_volumes": [
2026-01-02 00:42:37.547654 | orchestrator |             {
2026-01-02 00:42:37.547665 | orchestrator |                 "data": "osd-block-319da19b-b53c-570d-92cc-c377bf830026",
2026-01-02 00:42:37.547676 | orchestrator |                 "data_vg": "ceph-319da19b-b53c-570d-92cc-c377bf830026"
2026-01-02 00:42:37.547687 | orchestrator |             },
2026-01-02 00:42:37.547697 | orchestrator |             {
2026-01-02 00:42:37.547708 | orchestrator |                 "data": "osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0",
2026-01-02 00:42:37.547719 | orchestrator |                 "data_vg": "ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0"
2026-01-02 00:42:37.547730 | orchestrator |             }
2026-01-02 00:42:37.547740 | orchestrator |         ]
2026-01-02 00:42:37.547751 | orchestrator |     }
2026-01-02 00:42:37.547762 | orchestrator | }
2026-01-02 00:42:37.547773 | orchestrator |
2026-01-02 00:42:37.547784 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-02 00:42:37.547795 | orchestrator | Friday 02 January 2026 00:42:35 +0000 (0:00:00.208) 0:00:24.609 ********
2026-01-02 00:42:37.547806 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-02 00:42:37.547816 | orchestrator |
2026-01-02 00:42:37.547827 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-02 00:42:37.547838 | orchestrator |
2026-01-02 00:42:37.547849 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-02 00:42:37.547860 | orchestrator | Friday 02 January 2026 00:42:36 +0000 (0:00:00.982) 0:00:25.592 ********
2026-01-02 00:42:37.547870 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-02 00:42:37.547881 | orchestrator |
2026-01-02 00:42:37.547892 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-02 00:42:37.547926 | orchestrator | Friday 02 January 2026 00:42:36 +0000 (0:00:00.484) 0:00:26.076 ********
2026-01-02 00:42:37.547938 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:42:37.547949 | orchestrator |
2026-01-02 00:42:37.547959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:37.547970 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.199) 0:00:26.275 ********
2026-01-02 00:42:37.547981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-02 00:42:37.547992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-02 00:42:37.548003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-02 00:42:37.548014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-02 00:42:37.548025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-02 00:42:37.548044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-02 00:42:44.175483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-02 00:42:44.175597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-02 00:42:44.175620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-02 00:42:44.175637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-02 00:42:44.175655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-02 00:42:44.175675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-02 00:42:44.175692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-02 00:42:44.175711 | orchestrator |
2026-01-02 00:42:44.175730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.175748 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.345) 0:00:26.621 ********
2026-01-02 00:42:44.175759 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.175770 | orchestrator |
2026-01-02 00:42:44.175779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.175790 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.185) 0:00:26.806 ********
2026-01-02 00:42:44.175799 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.175809 | orchestrator |
2026-01-02 00:42:44.175818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.175828 | orchestrator | Friday 02 January 2026 00:42:37 +0000 (0:00:00.178) 0:00:26.985 ********
2026-01-02 00:42:44.175838 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.175847 | orchestrator |
2026-01-02 00:42:44.175857 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.175866 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.179) 0:00:27.164 ********
2026-01-02 00:42:44.175876 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.175888 | orchestrator |
2026-01-02 00:42:44.175904 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.175920 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.183) 0:00:27.348 ********
2026-01-02 00:42:44.175938 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.175954 | orchestrator |
2026-01-02 00:42:44.175970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.175986 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.168) 0:00:27.516 ********
2026-01-02 00:42:44.176003 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.176018 | orchestrator |
2026-01-02 00:42:44.176035 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176086 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.189) 0:00:27.706 ********
2026-01-02 00:42:44.176142 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.176161 | orchestrator |
2026-01-02 00:42:44.176178 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176194 | orchestrator | Friday 02 January 2026 00:42:38 +0000 (0:00:00.183) 0:00:27.890 ********
2026-01-02 00:42:44.176211 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.176228 | orchestrator |
2026-01-02 00:42:44.176245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176265 | orchestrator | Friday 02 January 2026 00:42:39 +0000 (0:00:00.230) 0:00:28.121 ********
2026-01-02 00:42:44.176281 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b)
2026-01-02 00:42:44.176300 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b)
2026-01-02 00:42:44.176317 | orchestrator |
2026-01-02 00:42:44.176333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176350 | orchestrator | Friday 02 January 2026 00:42:39 +0000 (0:00:00.543) 0:00:28.664 ********
2026-01-02 00:42:44.176367 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee)
2026-01-02 00:42:44.176384 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee)
2026-01-02 00:42:44.176400 | orchestrator |
2026-01-02 00:42:44.176416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176433 | orchestrator | Friday 02 January 2026 00:42:39 +0000 (0:00:00.321) 0:00:28.986 ********
2026-01-02 00:42:44.176448 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8)
2026-01-02 00:42:44.176465 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8)
2026-01-02 00:42:44.176482 | orchestrator |
2026-01-02 00:42:44.176499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176515 | orchestrator | Friday 02 January 2026 00:42:40 +0000 (0:00:00.333) 0:00:29.319 ********
2026-01-02 00:42:44.176531 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e)
2026-01-02 00:42:44.176547 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e)
2026-01-02 00:42:44.176562 | orchestrator |
2026-01-02 00:42:44.176579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:42:44.176595 | orchestrator | Friday 02 January 2026 00:42:40 +0000 (0:00:00.333) 0:00:29.653 ********
2026-01-02 00:42:44.176610 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-02 00:42:44.176625 | orchestrator |
2026-01-02 00:42:44.176641 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.176682 | orchestrator | Friday 02 January 2026 00:42:40 +0000 (0:00:00.250) 0:00:29.904 ********
2026-01-02 00:42:44.176699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-02 00:42:44.176716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-02 00:42:44.176731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-02 00:42:44.176747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-02 00:42:44.176764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-02 00:42:44.176801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-02 00:42:44.176817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-02 00:42:44.176833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-02 00:42:44.176864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-02 00:42:44.176879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-02 00:42:44.176894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-02 00:42:44.176910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-02 00:42:44.176926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-02 00:42:44.176943 | orchestrator |
2026-01-02 00:42:44.176958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.176975 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.280) 0:00:30.184 ********
2026-01-02 00:42:44.176992 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177009 | orchestrator |
2026-01-02 00:42:44.177025 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177040 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.151) 0:00:30.336 ********
2026-01-02 00:42:44.177056 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177073 | orchestrator |
2026-01-02 00:42:44.177089 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177143 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.183) 0:00:30.519 ********
2026-01-02 00:42:44.177162 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177181 | orchestrator |
2026-01-02 00:42:44.177197 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177212 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.194) 0:00:30.714 ********
2026-01-02 00:42:44.177229 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177245 | orchestrator |
2026-01-02 00:42:44.177262 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177278 | orchestrator | Friday 02 January 2026 00:42:41 +0000 (0:00:00.182) 0:00:30.896 ********
2026-01-02 00:42:44.177294 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177310 | orchestrator |
2026-01-02 00:42:44.177326 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177341 | orchestrator | Friday 02 January 2026 00:42:42 +0000 (0:00:00.205) 0:00:31.102 ********
2026-01-02 00:42:44.177357 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177373 | orchestrator |
2026-01-02 00:42:44.177389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177406 | orchestrator | Friday 02 January 2026 00:42:42 +0000 (0:00:00.455) 0:00:31.557 ********
2026-01-02 00:42:44.177422 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177439 | orchestrator |
2026-01-02 00:42:44.177455 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177470 | orchestrator | Friday 02 January 2026 00:42:42 +0000 (0:00:00.189) 0:00:31.747 ********
2026-01-02 00:42:44.177487 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177504 | orchestrator |
2026-01-02 00:42:44.177521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177538 | orchestrator | Friday 02 January 2026 00:42:42 +0000 (0:00:00.184) 0:00:31.932 ********
2026-01-02 00:42:44.177554 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-02 00:42:44.177571 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-02 00:42:44.177588 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-02 00:42:44.177605 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-02 00:42:44.177621 | orchestrator |
2026-01-02 00:42:44.177638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177655 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.611) 0:00:32.544 ********
2026-01-02 00:42:44.177673 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177721 | orchestrator |
2026-01-02 00:42:44.177739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177757 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.156) 0:00:32.700 ********
2026-01-02 00:42:44.177774 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177791 | orchestrator |
2026-01-02 00:42:44.177807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177824 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.191) 0:00:32.892 ********
2026-01-02 00:42:44.177839 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177856 | orchestrator |
2026-01-02 00:42:44.177871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:42:44.177889 | orchestrator | Friday 02 January 2026 00:42:43 +0000 (0:00:00.187) 0:00:33.079 ********
2026-01-02 00:42:44.177907 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:44.177924 | orchestrator |
2026-01-02 00:42:44.177958 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-02 00:42:47.797219 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.170) 0:00:33.250 ********
2026-01-02 00:42:47.798305 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-02 00:42:47.798387 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-02 00:42:47.798402 | orchestrator |
2026-01-02 00:42:47.798416 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-02 00:42:47.798428 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.153) 0:00:33.403 ********
2026-01-02 00:42:47.798439 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.798451 | orchestrator |
2026-01-02 00:42:47.798462 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-02 00:42:47.798473 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.114) 0:00:33.518 ********
2026-01-02 00:42:47.798484 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.798495 | orchestrator |
2026-01-02 00:42:47.798506 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-02 00:42:47.798517 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.125) 0:00:33.643 ********
2026-01-02 00:42:47.798528 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.798539 | orchestrator |
2026-01-02 00:42:47.798550 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-02 00:42:47.798561 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.230) 0:00:33.874 ********
2026-01-02 00:42:47.798572 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:42:47.798584 | orchestrator |
2026-01-02 00:42:47.798596 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-02 00:42:47.798607 | orchestrator | Friday 02 January 2026 00:42:44 +0000 (0:00:00.138) 0:00:34.012 ********
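The "Set UUIDs for OSD VGs/LVs" task above starts from entries with `'value': None` and ends (see the later "Print ceph_osd_devices" output) with each device carrying a stable `osd_lvm_uuid`. A minimal sketch of that step, assuming name-based (version 5) UUIDs derived from hostname and device name — the actual namespace and name format used by the OSISM playbook are not shown in this log and are assumptions here:

```python
import uuid

def assign_osd_lvm_uuids(hostname: str, ceph_osd_devices: dict) -> dict:
    """Fill in a deterministic osd_lvm_uuid for devices that lack one.

    Hypothetical sketch: uuid5 over NAMESPACE_DNS with "<hostname>-<device>"
    is an assumed input; the real playbook may derive the UUID differently.
    """
    result = {}
    for device, config in ceph_osd_devices.items():
        if not config or "osd_lvm_uuid" not in config:
            config = {
                "osd_lvm_uuid": str(
                    uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}")
                )
            }
        result[device] = config
    return result

devices = assign_osd_lvm_uuids("testbed-node-5", {"sdb": None, "sdc": None})
```

Because the UUIDs are name-based rather than random, re-running the task on the same host yields the same VG/LV names, which keeps the generated configuration idempotent.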
2026-01-02 00:42:47.798619 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}})
2026-01-02 00:42:47.798631 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8699efe3-2ea7-5359-bcef-4eac218b02a9'}})
2026-01-02 00:42:47.798642 | orchestrator |
2026-01-02 00:42:47.798653 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-02 00:42:47.798664 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.141) 0:00:34.153 ********
2026-01-02 00:42:47.798675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}})
2026-01-02 00:42:47.798688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8699efe3-2ea7-5359-bcef-4eac218b02a9'}})
2026-01-02 00:42:47.798699 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.798710 | orchestrator |
2026-01-02 00:42:47.798721 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-02 00:42:47.798732 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.127) 0:00:34.280 ********
2026-01-02 00:42:47.798776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}})
2026-01-02 00:42:47.798788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8699efe3-2ea7-5359-bcef-4eac218b02a9'}})
2026-01-02 00:42:47.798799 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.798810 | orchestrator |
2026-01-02 00:42:47.798821 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-02 00:42:47.798832 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.152) 0:00:34.433 ********
2026-01-02 00:42:47.798869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}})
2026-01-02 00:42:47.798882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8699efe3-2ea7-5359-bcef-4eac218b02a9'}})
2026-01-02 00:42:47.798893 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.798904 | orchestrator |
2026-01-02 00:42:47.798915 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-02 00:42:47.798926 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.142) 0:00:34.576 ********
2026-01-02 00:42:47.798936 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:42:47.798947 | orchestrator |
2026-01-02 00:42:47.798958 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-02 00:42:47.798969 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.125) 0:00:34.701 ********
2026-01-02 00:42:47.798980 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:42:47.798991 | orchestrator |
2026-01-02 00:42:47.799001 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-02 00:42:47.799012 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.112) 0:00:34.814 ********
2026-01-02 00:42:47.799023 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.799034 | orchestrator |
2026-01-02 00:42:47.799045 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-02 00:42:47.799055 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.118) 0:00:34.932 ********
2026-01-02 00:42:47.799066 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.799077 | orchestrator |
2026-01-02 00:42:47.799088 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-02 00:42:47.799099 | orchestrator | Friday 02 January 2026 00:42:45 +0000 (0:00:00.110) 0:00:35.042 ********
2026-01-02 00:42:47.799135 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.799146 | orchestrator |
2026-01-02 00:42:47.799157 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-02 00:42:47.799167 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.114) 0:00:35.157 ********
2026-01-02 00:42:47.799178 | orchestrator | ok: [testbed-node-5] => {
2026-01-02 00:42:47.799190 | orchestrator |     "ceph_osd_devices": {
2026-01-02 00:42:47.799201 | orchestrator |         "sdb": {
2026-01-02 00:42:47.799238 | orchestrator |             "osd_lvm_uuid": "804dd052-7dd8-5ffa-9f76-70ebd20e36f7"
2026-01-02 00:42:47.799250 | orchestrator |         },
2026-01-02 00:42:47.799262 | orchestrator |         "sdc": {
2026-01-02 00:42:47.799273 | orchestrator |             "osd_lvm_uuid": "8699efe3-2ea7-5359-bcef-4eac218b02a9"
2026-01-02 00:42:47.799284 | orchestrator |         }
2026-01-02 00:42:47.799295 | orchestrator |     }
2026-01-02 00:42:47.799306 | orchestrator | }
2026-01-02 00:42:47.799317 | orchestrator |
2026-01-02 00:42:47.799328 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-02 00:42:47.799339 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.116) 0:00:35.273 ********
2026-01-02 00:42:47.799350 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.799361 | orchestrator |
2026-01-02 00:42:47.799372 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-02 00:42:47.799384 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.114) 0:00:35.388 ********
2026-01-02 00:42:47.799405 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.799416 | orchestrator |
2026-01-02 00:42:47.799427 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-02 00:42:47.799438 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.230) 0:00:35.618 ********
2026-01-02 00:42:47.799449 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:42:47.799460 | orchestrator |
2026-01-02 00:42:47.799471 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-02 00:42:47.799482 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.112) 0:00:35.730 ********
2026-01-02 00:42:47.799493 | orchestrator | changed: [testbed-node-5] => {
2026-01-02 00:42:47.799504 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-02 00:42:47.799515 | orchestrator |         "ceph_osd_devices": {
2026-01-02 00:42:47.799526 | orchestrator |             "sdb": {
2026-01-02 00:42:47.799537 | orchestrator |                 "osd_lvm_uuid": "804dd052-7dd8-5ffa-9f76-70ebd20e36f7"
2026-01-02 00:42:47.799548 | orchestrator |             },
2026-01-02 00:42:47.799560 | orchestrator |             "sdc": {
2026-01-02 00:42:47.799571 | orchestrator |                 "osd_lvm_uuid": "8699efe3-2ea7-5359-bcef-4eac218b02a9"
2026-01-02 00:42:47.799582 | orchestrator |             }
2026-01-02 00:42:47.799593 | orchestrator |         },
2026-01-02 00:42:47.799604 | orchestrator |         "lvm_volumes": [
2026-01-02 00:42:47.799615 | orchestrator |             {
2026-01-02 00:42:47.799626 | orchestrator |                 "data": "osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7",
2026-01-02 00:42:47.799637 | orchestrator |                 "data_vg": "ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7"
2026-01-02 00:42:47.799648 | orchestrator |             },
2026-01-02 00:42:47.799659 | orchestrator |             {
2026-01-02 00:42:47.799670 | orchestrator |                 "data": "osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9",
2026-01-02 00:42:47.799682 | orchestrator |                 "data_vg": "ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9"
2026-01-02 00:42:47.799693 | orchestrator |             }
2026-01-02 00:42:47.799709 | orchestrator |         ]
2026-01-02 00:42:47.799720 | orchestrator |     }
2026-01-02 00:42:47.799731 | orchestrator | }
2026-01-02 00:42:47.799742 | orchestrator |
2026-01-02 00:42:47.799753 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-02 00:42:47.799764 | orchestrator | Friday 02 January 2026 00:42:46 +0000 (0:00:00.189) 0:00:35.920 ********
2026-01-02 00:42:47.799775 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-02 00:42:47.799786 | orchestrator |
2026-01-02 00:42:47.799798 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:42:47.799809 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-02 00:42:47.799821 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-02 00:42:47.799833 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-02 00:42:47.799844 | orchestrator |
2026-01-02 00:42:47.799855 | orchestrator |
2026-01-02 00:42:47.799866 | orchestrator |
2026-01-02 00:42:47.799877 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:42:47.799888 | orchestrator | Friday 02 January 2026 00:42:47 +0000 (0:00:00.936) 0:00:36.857 ********
2026-01-02 00:42:47.799899 | orchestrator | ===============================================================================
2026-01-02 00:42:47.799910 | orchestrator | Write configuration file ------------------------------------------------ 3.62s
2026-01-02 00:42:47.799921 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2026-01-02 00:42:47.799932 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2026-01-02 00:42:47.799943 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.93s
2026-01-02 00:42:47.799961 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-01-02 00:42:47.799972 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-01-02 00:42:47.799983 | orchestrator | Print configuration data ------------------------------------------------ 0.76s
2026-01-02 00:42:47.799994 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2026-01-02 00:42:47.800005 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-01-02 00:42:47.800016 | orchestrator | Get initial list of available block devices ----------------------------- 0.62s
2026-01-02 00:42:47.800027 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s
2026-01-02 00:42:47.800038 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2026-01-02 00:42:47.800049 | orchestrator | Set DB devices config data ---------------------------------------------- 0.60s
2026-01-02 00:42:47.800068 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2026-01-02 00:42:48.168005 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-01-02 00:42:48.168165 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.51s
2026-01-02 00:42:48.168181 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-01-02 00:42:48.168189 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.51s
2026-01-02 00:42:48.168480 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2026-01-02 00:42:48.168491 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.50s
2026-01-02 00:43:10.729652 | orchestrator | 2026-01-02 00:43:10 | INFO  | Task 34d11ab3-79c0-488f-a761-4afd85bfde36 (sync inventory) is running in background. Output coming soon.
2026-01-02 00:43:35.253014 | orchestrator | 2026-01-02 00:43:12 | INFO  | Starting group_vars file reorganization
2026-01-02 00:43:35.253180 | orchestrator | 2026-01-02 00:43:12 | INFO  | Moved 0 file(s) to their respective directories
2026-01-02 00:43:35.253200 | orchestrator | 2026-01-02 00:43:12 | INFO  | Group_vars file reorganization completed
2026-01-02 00:43:35.253212 | orchestrator | 2026-01-02 00:43:14 | INFO  | Starting variable preparation from inventory
2026-01-02 00:43:35.253224 | orchestrator | 2026-01-02 00:43:17 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-02 00:43:35.253236 | orchestrator | 2026-01-02 00:43:17 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-02 00:43:35.253269 | orchestrator | 2026-01-02 00:43:17 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-02 00:43:35.253281 | orchestrator | 2026-01-02 00:43:17 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-02 00:43:35.253293 | orchestrator | 2026-01-02 00:43:17 | INFO  | Variable preparation completed
2026-01-02 00:43:35.253305 | orchestrator | 2026-01-02 00:43:18 | INFO  | Starting inventory overwrite handling
2026-01-02 00:43:35.253321 | orchestrator | 2026-01-02 00:43:18 | INFO  | Handling group overwrites in 99-overwrite
2026-01-02 00:43:35.253332 | orchestrator | 2026-01-02 00:43:18 | INFO  | Removing group frr:children from 60-generic
2026-01-02 00:43:35.253343 | orchestrator | 2026-01-02 00:43:18 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-02 00:43:35.253355 | orchestrator | 2026-01-02 00:43:18 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-02 00:43:35.253366 | orchestrator | 2026-01-02 00:43:18 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-02 00:43:35.253377 | orchestrator | 2026-01-02 00:43:18 | INFO  | Handling group overwrites in 20-roles
2026-01-02 00:43:35.253413 | orchestrator | 2026-01-02 00:43:18 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-02 00:43:35.253425 | orchestrator | 2026-01-02 00:43:18 | INFO  | Removed 5 group(s) in total
2026-01-02 00:43:35.253436 | orchestrator | 2026-01-02 00:43:18 | INFO  | Inventory overwrite handling completed
2026-01-02 00:43:35.253447 | orchestrator | 2026-01-02 00:43:19 | INFO  | Starting merge of inventory files
2026-01-02 00:43:35.253458 | orchestrator | 2026-01-02 00:43:19 | INFO  | Inventory files merged successfully
2026-01-02 00:43:35.253469 | orchestrator | 2026-01-02 00:43:23 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-02 00:43:35.253481 | orchestrator | 2026-01-02 00:43:34 | INFO  | Successfully wrote ClusterShell configuration
2026-01-02 00:43:35.253492 | orchestrator | [master 59893c1] 2026-01-02-00-43
2026-01-02 00:43:35.253505 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-02 00:43:36.972265 | orchestrator | 2026-01-02 00:43:36 | INFO  | Task 8b3354ce-d1e7-4e2d-8447-8720311ea285 (ceph-create-lvm-devices) was prepared for execution.
2026-01-02 00:43:36.972391 | orchestrator | 2026-01-02 00:43:36 | INFO  | It takes a moment until task 8b3354ce-d1e7-4e2d-8447-8720311ea285 (ceph-create-lvm-devices) has been started and output is visible here.
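The "Print configuration data" output above shows the pattern the playbook follows when building `lvm_volumes` from `ceph_osd_devices`: each `osd_lvm_uuid` becomes an `osd-block-<uuid>` logical volume inside a `ceph-<uuid>` volume group. A minimal illustrative sketch of that mapping (not the OSISM playbook source itself), using the values from the testbed-node-5 run:

```python
# ceph_osd_devices as printed by the "Print ceph_osd_devices" task above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "804dd052-7dd8-5ffa-9f76-70ebd20e36f7"},
    "sdc": {"osd_lvm_uuid": "8699efe3-2ea7-5359-bcef-4eac218b02a9"},
}

# Each UUID yields a data LV name and a data VG name; this reproduces the
# block-only case from the log (the db/wal variants were skipped here).
lvm_volumes = [
    {
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]
```

The result matches the `lvm_volumes` list written to the configuration file by the "Write configuration file" handler.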
2026-01-02 00:43:46.639920 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-02 00:43:46.640035 | orchestrator | 2.16.14
2026-01-02 00:43:46.640053 | orchestrator |
2026-01-02 00:43:46.640066 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-02 00:43:46.640119 | orchestrator |
2026-01-02 00:43:46.640140 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-02 00:43:46.640159 | orchestrator | Friday 02 January 2026 00:43:40 +0000 (0:00:00.268) 0:00:00.268 ********
2026-01-02 00:43:46.640173 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-02 00:43:46.640184 | orchestrator |
2026-01-02 00:43:46.640196 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-02 00:43:46.640207 | orchestrator | Friday 02 January 2026 00:43:40 +0000 (0:00:00.212) 0:00:00.481 ********
2026-01-02 00:43:46.640218 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:43:46.640230 | orchestrator |
2026-01-02 00:43:46.640242 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640254 | orchestrator | Friday 02 January 2026 00:43:40 +0000 (0:00:00.208) 0:00:00.689 ********
2026-01-02 00:43:46.640265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-02 00:43:46.640276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-02 00:43:46.640287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-02 00:43:46.640298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-02 00:43:46.640309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-02 00:43:46.640320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-02 00:43:46.640331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-02 00:43:46.640341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-02 00:43:46.640352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-02 00:43:46.640363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-02 00:43:46.640374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-02 00:43:46.640385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-02 00:43:46.640424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-02 00:43:46.640436 | orchestrator |
2026-01-02 00:43:46.640449 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640461 | orchestrator | Friday 02 January 2026 00:43:41 +0000 (0:00:00.423) 0:00:01.113 ********
2026-01-02 00:43:46.640474 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640487 | orchestrator |
2026-01-02 00:43:46.640500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640513 | orchestrator | Friday 02 January 2026 00:43:41 +0000 (0:00:00.176) 0:00:01.290 ********
2026-01-02 00:43:46.640525 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640537 | orchestrator |
2026-01-02 00:43:46.640550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640562 | orchestrator | Friday 02 January 2026 00:43:41 +0000 (0:00:00.172) 0:00:01.462 ********
2026-01-02 00:43:46.640575 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640587 | orchestrator |
2026-01-02 00:43:46.640600 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640613 | orchestrator | Friday 02 January 2026 00:43:41 +0000 (0:00:00.180) 0:00:01.642 ********
2026-01-02 00:43:46.640625 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640637 | orchestrator |
2026-01-02 00:43:46.640650 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640662 | orchestrator | Friday 02 January 2026 00:43:42 +0000 (0:00:00.166) 0:00:01.809 ********
2026-01-02 00:43:46.640675 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640687 | orchestrator |
2026-01-02 00:43:46.640700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640713 | orchestrator | Friday 02 January 2026 00:43:42 +0000 (0:00:00.169) 0:00:01.979 ********
2026-01-02 00:43:46.640727 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640739 | orchestrator |
2026-01-02 00:43:46.640752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640764 | orchestrator | Friday 02 January 2026 00:43:42 +0000 (0:00:00.167) 0:00:02.146 ********
2026-01-02 00:43:46.640776 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640789 | orchestrator |
2026-01-02 00:43:46.640802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640813 | orchestrator | Friday 02 January 2026 00:43:42 +0000 (0:00:00.168) 0:00:02.315 ********
2026-01-02 00:43:46.640823 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.640834 | orchestrator |
2026-01-02 00:43:46.640845 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640856 | orchestrator | Friday 02 January 2026 00:43:42 +0000 (0:00:00.169) 0:00:02.484 ********
2026-01-02 00:43:46.640867 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90)
2026-01-02 00:43:46.640879 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90)
2026-01-02 00:43:46.640890 | orchestrator |
2026-01-02 00:43:46.640902 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640932 | orchestrator | Friday 02 January 2026 00:43:43 +0000 (0:00:00.350) 0:00:02.835 ********
2026-01-02 00:43:46.640944 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4)
2026-01-02 00:43:46.640955 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4)
2026-01-02 00:43:46.640965 | orchestrator |
2026-01-02 00:43:46.640976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.640987 | orchestrator | Friday 02 January 2026 00:43:43 +0000 (0:00:00.524) 0:00:03.359 ********
2026-01-02 00:43:46.640998 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91)
2026-01-02 00:43:46.641019 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91)
2026-01-02 00:43:46.641031 | orchestrator |
2026-01-02 00:43:46.641041 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.641052 | orchestrator | Friday 02 January 2026 00:43:44 +0000 (0:00:00.497) 0:00:03.857 ********
2026-01-02 00:43:46.641063 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507)
2026-01-02 00:43:46.641134 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507)
2026-01-02 00:43:46.641147 | orchestrator |
2026-01-02 00:43:46.641159 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-02 00:43:46.641169 | orchestrator | Friday 02 January 2026 00:43:44 +0000 (0:00:00.671) 0:00:04.529 ********
2026-01-02 00:43:46.641181 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-02 00:43:46.641192 | orchestrator |
2026-01-02 00:43:46.641203 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641213 | orchestrator | Friday 02 January 2026 00:43:45 +0000 (0:00:00.287) 0:00:04.816 ********
2026-01-02 00:43:46.641224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-02 00:43:46.641235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-02 00:43:46.641245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-02 00:43:46.641276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-02 00:43:46.641287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-02 00:43:46.641298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-02 00:43:46.641309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-02 00:43:46.641320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-02 00:43:46.641331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-02 00:43:46.641342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-02 00:43:46.641352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-02 00:43:46.641368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-02 00:43:46.641380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-02 00:43:46.641391 | orchestrator |
2026-01-02 00:43:46.641402 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641413 | orchestrator | Friday 02 January 2026 00:43:45 +0000 (0:00:00.331) 0:00:05.148 ********
2026-01-02 00:43:46.641424 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641435 | orchestrator |
2026-01-02 00:43:46.641445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641456 | orchestrator | Friday 02 January 2026 00:43:45 +0000 (0:00:00.154) 0:00:05.302 ********
2026-01-02 00:43:46.641467 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641477 | orchestrator |
2026-01-02 00:43:46.641488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641499 | orchestrator | Friday 02 January 2026 00:43:45 +0000 (0:00:00.164) 0:00:05.467 ********
2026-01-02 00:43:46.641510 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641521 | orchestrator |
2026-01-02 00:43:46.641531 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641542 | orchestrator | Friday 02 January 2026 00:43:45 +0000 (0:00:00.174) 0:00:05.641 ********
2026-01-02 00:43:46.641561 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641572 | orchestrator |
2026-01-02 00:43:46.641583 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641594 | orchestrator | Friday 02 January 2026 00:43:46 +0000 (0:00:00.165) 0:00:05.807 ********
2026-01-02 00:43:46.641604 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641615 | orchestrator |
2026-01-02 00:43:46.641626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641637 | orchestrator | Friday 02 January 2026 00:43:46 +0000 (0:00:00.153) 0:00:05.960 ********
2026-01-02 00:43:46.641647 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641658 | orchestrator |
2026-01-02 00:43:46.641669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:46.641680 | orchestrator | Friday 02 January 2026 00:43:46 +0000 (0:00:00.174) 0:00:06.135 ********
2026-01-02 00:43:46.641691 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:46.641701 | orchestrator |
2026-01-02 00:43:46.641720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:54.074651 | orchestrator | Friday 02 January 2026 00:43:46 +0000 (0:00:00.195) 0:00:06.331 ********
2026-01-02 00:43:54.074765 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.074784 | orchestrator |
2026-01-02 00:43:54.074797 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:54.074809 | orchestrator | Friday 02 January 2026 00:43:46 +0000 (0:00:00.170) 0:00:06.501 ********
2026-01-02 00:43:54.074821 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-02 00:43:54.074832 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-02 00:43:54.074844 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-02 00:43:54.074855 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-02 00:43:54.074866 | orchestrator |
2026-01-02 00:43:54.074884 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:54.074902 | orchestrator | Friday 02 January 2026 00:43:47 +0000 (0:00:00.957) 0:00:07.458 ********
2026-01-02 00:43:54.074920 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.074938 | orchestrator |
2026-01-02 00:43:54.074958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:54.074970 | orchestrator | Friday 02 January 2026 00:43:47 +0000 (0:00:00.212) 0:00:07.671 ********
2026-01-02 00:43:54.074980 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.074991 | orchestrator |
2026-01-02 00:43:54.075002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:54.075014 | orchestrator | Friday 02 January 2026 00:43:48 +0000 (0:00:00.197) 0:00:07.869 ********
2026-01-02 00:43:54.075025 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075036 | orchestrator |
2026-01-02 00:43:54.075047 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-02 00:43:54.075058 | orchestrator | Friday 02 January 2026 00:43:48 +0000 (0:00:00.176) 0:00:08.045 ********
2026-01-02 00:43:54.075116 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075129 | orchestrator |
2026-01-02 00:43:54.075140 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-02 00:43:54.075152 | orchestrator | Friday 02 January 2026 00:43:48 +0000 (0:00:00.180) 0:00:08.226 ********
2026-01-02 00:43:54.075163 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075174 | orchestrator |
2026-01-02 00:43:54.075188 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-02 00:43:54.075202 | orchestrator | Friday 02 January 2026 00:43:48 +0000 (0:00:00.108) 0:00:08.334 ********
2026-01-02 00:43:54.075215 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fa5ccc98-5ec0-5843-b525-cc12dffb9804'}})
2026-01-02 00:43:54.075228 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'}})
2026-01-02 00:43:54.075241 | orchestrator |
2026-01-02 00:43:54.075254 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-02 00:43:54.075293 | orchestrator | Friday 02 January 2026 00:43:48 +0000 (0:00:00.159) 0:00:08.494 ********
2026-01-02 00:43:54.075309 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075324 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075336 | orchestrator |
2026-01-02 00:43:54.075347 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-02 00:43:54.075358 | orchestrator | Friday 02 January 2026 00:43:50 +0000 (0:00:02.048) 0:00:10.543 ********
2026-01-02 00:43:54.075369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075393 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075403 | orchestrator |
2026-01-02 00:43:54.075414 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-02 00:43:54.075425 | orchestrator | Friday 02 January 2026 00:43:50 +0000 (0:00:00.136) 0:00:10.679 ********
2026-01-02 00:43:54.075436 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075447 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075458 | orchestrator |
2026-01-02 00:43:54.075470 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-02 00:43:54.075481 | orchestrator | Friday 02 January 2026 00:43:52 +0000 (0:00:01.417) 0:00:12.096 ********
2026-01-02 00:43:54.075491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075513 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075524 | orchestrator |
2026-01-02 00:43:54.075535 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-02 00:43:54.075546 | orchestrator | Friday 02 January 2026 00:43:52 +0000 (0:00:00.123) 0:00:12.222 ********
2026-01-02 00:43:54.075577 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075589 | orchestrator |
2026-01-02 00:43:54.075600 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-02 00:43:54.075611 | orchestrator | Friday 02 January 2026 00:43:52 +0000 (0:00:00.123) 0:00:12.346 ********
2026-01-02 00:43:54.075622 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075633 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075644 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075655 | orchestrator |
2026-01-02 00:43:54.075666 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-02 00:43:54.075676 | orchestrator | Friday 02 January 2026 00:43:52 +0000 (0:00:00.244) 0:00:12.590 ********
2026-01-02 00:43:54.075687 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075698 | orchestrator |
2026-01-02 00:43:54.075709 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-02 00:43:54.075720 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.114) 0:00:12.704 ********
2026-01-02 00:43:54.075741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075752 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075763 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075774 | orchestrator |
2026-01-02 00:43:54.075785 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-02 00:43:54.075796 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.118) 0:00:12.823 ********
2026-01-02 00:43:54.075807 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075818 | orchestrator |
2026-01-02 00:43:54.075829 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-02 00:43:54.075840 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.133) 0:00:12.956 ********
2026-01-02 00:43:54.075851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.075873 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.075884 | orchestrator |
2026-01-02 00:43:54.075895 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-02 00:43:54.075906 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.150) 0:00:13.107 ********
2026-01-02 00:43:54.075917 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:43:54.075928 | orchestrator |
2026-01-02 00:43:54.075939 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-02 00:43:54.075969 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.125) 0:00:13.233 ********
2026-01-02 00:43:54.075986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.075998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.076009 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.076020 | orchestrator |
2026-01-02 00:43:54.076031 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-02 00:43:54.076042 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.126) 0:00:13.359 ********
2026-01-02 00:43:54.076053 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.076064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.076103 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.076120 | orchestrator |
2026-01-02 00:43:54.076138 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-02 00:43:54.076158 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.135) 0:00:13.495 ********
2026-01-02 00:43:54.076175 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:43:54.076190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:43:54.076201 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.076212 | orchestrator |
2026-01-02 00:43:54.076223 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-02 00:43:54.076243 | orchestrator | Friday 02 January 2026 00:43:53 +0000 (0:00:00.133) 0:00:13.629 ********
2026-01-02 00:43:54.076255 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:43:54.076265 | orchestrator |
2026-01-02 00:43:54.076276 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-02 00:43:54.076295 | orchestrator | Friday 02 January 2026 00:43:54 +0000 (0:00:00.132) 0:00:13.761 ********
2026-01-02 00:44:00.083295 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.083411 | orchestrator |
2026-01-02 00:44:00.083427 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-02 00:44:00.083440 | orchestrator | Friday 02 January 2026 00:43:54 +0000 (0:00:00.123) 0:00:13.884 ********
2026-01-02 00:44:00.083451 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.083462 | orchestrator |
2026-01-02 00:44:00.083474 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-02 00:44:00.083485 | orchestrator | Friday 02 January 2026 00:43:54 +0000 (0:00:00.129) 0:00:14.014 ********
2026-01-02 00:44:00.083497 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:00.083508 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-02 00:44:00.083520 | orchestrator | }
2026-01-02 00:44:00.083531 | orchestrator |
2026-01-02 00:44:00.083542 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-02 00:44:00.083554 | orchestrator | Friday 02 January 2026 00:43:54 +0000 (0:00:00.250) 0:00:14.265 ********
2026-01-02 00:44:00.083565 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:00.083576 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-02 00:44:00.083587 | orchestrator | }
2026-01-02 00:44:00.083598 | orchestrator |
2026-01-02 00:44:00.083609 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-02 00:44:00.083620 | orchestrator | Friday 02 January 2026 00:43:54 +0000 (0:00:00.134) 0:00:14.399 ********
2026-01-02 00:44:00.083633 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:00.083644 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-02 00:44:00.083655 | orchestrator | }
2026-01-02 00:44:00.083666 | orchestrator |
2026-01-02 00:44:00.083678 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-02 00:44:00.083689 | orchestrator | Friday 02 January 2026 00:43:54 +0000 (0:00:00.137) 0:00:14.537 ********
2026-01-02 00:44:00.083700 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:00.083711 | orchestrator |
2026-01-02 00:44:00.083722 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-02 00:44:00.083733 | orchestrator | Friday 02 January 2026 00:43:55 +0000 (0:00:00.626) 0:00:15.164 ********
2026-01-02 00:44:00.083744 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:00.083756 | orchestrator |
2026-01-02 00:44:00.083767 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-02 00:44:00.083778 | orchestrator | Friday 02 January 2026 00:43:55 +0000 (0:00:00.497) 0:00:15.661 ********
2026-01-02 00:44:00.083789 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:00.083800 | orchestrator |
2026-01-02 00:44:00.083811 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-02 00:44:00.083824 | orchestrator | Friday 02 January 2026 00:43:56 +0000 (0:00:00.503) 0:00:16.165 ********
2026-01-02 00:44:00.083838 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:44:00.083851 | orchestrator |
2026-01-02 00:44:00.083865 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-02 00:44:00.083879 | orchestrator | Friday 02 January 2026 00:43:56 +0000 (0:00:00.128) 0:00:16.293 ********
2026-01-02 00:44:00.083893 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.083906 | orchestrator |
2026-01-02 00:44:00.083920 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-02 00:44:00.083934 | orchestrator | Friday 02 January 2026 00:43:56 +0000 (0:00:00.099) 0:00:16.392 ********
2026-01-02 00:44:00.083947 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.083961 | orchestrator |
2026-01-02 00:44:00.083974 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-02 00:44:00.084045 | orchestrator | Friday 02 January 2026 00:43:56 +0000 (0:00:00.104) 0:00:16.497 ********
2026-01-02 00:44:00.084099 | orchestrator | ok: [testbed-node-3] => {
2026-01-02 00:44:00.084115 | orchestrator |  "vgs_report": {
2026-01-02 00:44:00.084128 | orchestrator |  "vg": []
2026-01-02 00:44:00.084141 | orchestrator |  }
2026-01-02 00:44:00.084153 | orchestrator | }
2026-01-02 00:44:00.084166 | orchestrator |
2026-01-02 00:44:00.084179 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-02 00:44:00.084193 | orchestrator | Friday 02 January 2026 00:43:56 +0000 (0:00:00.135) 0:00:16.632 ********
2026-01-02 00:44:00.084204 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084215 | orchestrator |
2026-01-02 00:44:00.084226 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-02 00:44:00.084237 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.113) 0:00:16.745 ********
2026-01-02 00:44:00.084248 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084259 | orchestrator |
2026-01-02 00:44:00.084270 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-02 00:44:00.084281 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.116) 0:00:16.862 ********
2026-01-02 00:44:00.084292 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084303 | orchestrator |
2026-01-02 00:44:00.084314 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-02 00:44:00.084325 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.219) 0:00:17.081 ********
2026-01-02 00:44:00.084336 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084347 | orchestrator |
2026-01-02 00:44:00.084358 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-02 00:44:00.084369 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.126) 0:00:17.208 ********
2026-01-02 00:44:00.084380 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084391 | orchestrator |
2026-01-02 00:44:00.084407 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-02 00:44:00.084423 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.121) 0:00:17.330 ********
2026-01-02 00:44:00.084434 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084445 | orchestrator |
2026-01-02 00:44:00.084457 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-02 00:44:00.084467 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.130) 0:00:17.460 ********
2026-01-02 00:44:00.084478 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084489 | orchestrator |
2026-01-02 00:44:00.084500 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-02 00:44:00.084511 | orchestrator | Friday 02 January 2026 00:43:57 +0000 (0:00:00.126) 0:00:17.587 ********
2026-01-02 00:44:00.084543 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084555 | orchestrator |
2026-01-02 00:44:00.084566 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-02 00:44:00.084577 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.113) 0:00:17.700 ********
2026-01-02 00:44:00.084588 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084599 | orchestrator |
2026-01-02 00:44:00.084610 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-02 00:44:00.084621 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.149) 0:00:17.849 ********
2026-01-02 00:44:00.084632 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084643 | orchestrator |
2026-01-02 00:44:00.084710 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-02 00:44:00.084724 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.152) 0:00:18.002 ********
2026-01-02 00:44:00.084735 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084746 | orchestrator |
2026-01-02 00:44:00.084757 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-02 00:44:00.084768 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.148) 0:00:18.151 ********
2026-01-02 00:44:00.084791 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084802 | orchestrator |
2026-01-02 00:44:00.084813 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-02 00:44:00.084825 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.158) 0:00:18.309 ********
2026-01-02 00:44:00.084835 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084846 | orchestrator |
2026-01-02 00:44:00.084857 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-02 00:44:00.084868 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.167) 0:00:18.477 ********
2026-01-02 00:44:00.084880 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084890 | orchestrator |
2026-01-02 00:44:00.084901 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-02 00:44:00.084912 | orchestrator | Friday 02 January 2026 00:43:58 +0000 (0:00:00.142) 0:00:18.619 ********
2026-01-02 00:44:00.084925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:44:00.084938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:44:00.084949 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.084960 | orchestrator |
2026-01-02 00:44:00.084971 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-02 00:44:00.084982 | orchestrator | Friday 02 January 2026 00:43:59 +0000 (0:00:00.332) 0:00:18.952 ********
2026-01-02 00:44:00.084994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:44:00.085005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:44:00.085016 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.085027 | orchestrator |
2026-01-02 00:44:00.085038 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-02 00:44:00.085049 | orchestrator | Friday 02 January 2026 00:43:59 +0000 (0:00:00.167) 0:00:19.119 ********
2026-01-02 00:44:00.085060 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:44:00.085102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:44:00.085122 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:44:00.085141 | orchestrator |
2026-01-02 00:44:00.085159 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-02 00:44:00.085172 | orchestrator | Friday 02 January 2026 00:43:59 +0000 (0:00:00.168) 0:00:19.287 ********
2026-01-02 00:44:00.085183 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:00.085194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:00.085205 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:00.085216 | orchestrator | 2026-01-02 00:44:00.085227 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-02 00:44:00.085238 | orchestrator | Friday 02 January 2026 00:43:59 +0000 (0:00:00.147) 0:00:19.435 ******** 2026-01-02 00:44:00.085249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:00.085260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:00.085279 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:00.085290 | orchestrator | 2026-01-02 00:44:00.085301 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-02 00:44:00.085322 | orchestrator | Friday 02 January 2026 00:43:59 +0000 (0:00:00.176) 0:00:19.611 ******** 2026-01-02 00:44:00.085342 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:05.368384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:05.368484 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:05.368497 | orchestrator | 2026-01-02 00:44:05.368506 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-02 00:44:05.368516 | orchestrator | Friday 02 January 2026 00:44:00 +0000 (0:00:00.161) 0:00:19.773 ******** 2026-01-02 00:44:05.368524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:05.368532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:05.368540 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:05.368547 | orchestrator | 2026-01-02 00:44:05.368555 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-02 00:44:05.368562 | orchestrator | Friday 02 January 2026 00:44:00 +0000 (0:00:00.179) 0:00:19.952 ******** 2026-01-02 00:44:05.368570 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:05.368578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:05.368585 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:05.368592 | orchestrator | 2026-01-02 00:44:05.368600 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-02 00:44:05.368607 | orchestrator | Friday 02 January 2026 00:44:00 +0000 (0:00:00.149) 0:00:20.102 ******** 2026-01-02 00:44:05.368614 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:44:05.368623 | orchestrator | 2026-01-02 00:44:05.368630 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-02 00:44:05.368637 | orchestrator | Friday 02 January 2026 00:44:00 +0000 
(0:00:00.561) 0:00:20.664 ******** 2026-01-02 00:44:05.368645 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:44:05.368652 | orchestrator | 2026-01-02 00:44:05.368659 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-02 00:44:05.368666 | orchestrator | Friday 02 January 2026 00:44:01 +0000 (0:00:00.523) 0:00:21.187 ******** 2026-01-02 00:44:05.368674 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:44:05.368681 | orchestrator | 2026-01-02 00:44:05.368688 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-02 00:44:05.368695 | orchestrator | Friday 02 January 2026 00:44:01 +0000 (0:00:00.149) 0:00:21.337 ******** 2026-01-02 00:44:05.368703 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'vg_name': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'}) 2026-01-02 00:44:05.368725 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'vg_name': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'}) 2026-01-02 00:44:05.368733 | orchestrator | 2026-01-02 00:44:05.368740 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-02 00:44:05.368747 | orchestrator | Friday 02 January 2026 00:44:01 +0000 (0:00:00.180) 0:00:21.517 ******** 2026-01-02 00:44:05.368774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:05.368783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:05.368790 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:05.368797 | orchestrator | 2026-01-02 00:44:05.368805 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-02 00:44:05.368812 | orchestrator | Friday 02 January 2026 00:44:02 +0000 (0:00:00.360) 0:00:21.877 ******** 2026-01-02 00:44:05.368819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:05.368826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:05.368834 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:05.368842 | orchestrator | 2026-01-02 00:44:05.368849 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-02 00:44:05.368856 | orchestrator | Friday 02 January 2026 00:44:02 +0000 (0:00:00.166) 0:00:22.044 ******** 2026-01-02 00:44:05.368863 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})  2026-01-02 00:44:05.368871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})  2026-01-02 00:44:05.368878 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:44:05.368885 | orchestrator | 2026-01-02 00:44:05.368893 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-02 00:44:05.368900 | orchestrator | Friday 02 January 2026 00:44:02 +0000 (0:00:00.138) 0:00:22.182 ******** 2026-01-02 00:44:05.368920 | orchestrator | ok: [testbed-node-3] => { 2026-01-02 00:44:05.368928 | orchestrator |  "lvm_report": { 2026-01-02 00:44:05.368937 | orchestrator |  "lv": [ 2026-01-02 00:44:05.368946 | orchestrator |  { 2026-01-02 00:44:05.368954 | orchestrator |  "lv_name": 
"osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce", 2026-01-02 00:44:05.368964 | orchestrator |  "vg_name": "ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce" 2026-01-02 00:44:05.368972 | orchestrator |  }, 2026-01-02 00:44:05.368982 | orchestrator |  { 2026-01-02 00:44:05.368990 | orchestrator |  "lv_name": "osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804", 2026-01-02 00:44:05.368999 | orchestrator |  "vg_name": "ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804" 2026-01-02 00:44:05.369008 | orchestrator |  } 2026-01-02 00:44:05.369017 | orchestrator |  ], 2026-01-02 00:44:05.369025 | orchestrator |  "pv": [ 2026-01-02 00:44:05.369034 | orchestrator |  { 2026-01-02 00:44:05.369043 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-02 00:44:05.369052 | orchestrator |  "vg_name": "ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804" 2026-01-02 00:44:05.369060 | orchestrator |  }, 2026-01-02 00:44:05.369089 | orchestrator |  { 2026-01-02 00:44:05.369098 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-02 00:44:05.369106 | orchestrator |  "vg_name": "ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce" 2026-01-02 00:44:05.369115 | orchestrator |  } 2026-01-02 00:44:05.369124 | orchestrator |  ] 2026-01-02 00:44:05.369132 | orchestrator |  } 2026-01-02 00:44:05.369140 | orchestrator | } 2026-01-02 00:44:05.369149 | orchestrator | 2026-01-02 00:44:05.369157 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-02 00:44:05.369165 | orchestrator | 2026-01-02 00:44:05.369174 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-02 00:44:05.369188 | orchestrator | Friday 02 January 2026 00:44:02 +0000 (0:00:00.314) 0:00:22.497 ******** 2026-01-02 00:44:05.369197 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-02 00:44:05.369205 | orchestrator | 2026-01-02 00:44:05.369214 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-02 
00:44:05.369222 | orchestrator | Friday 02 January 2026 00:44:03 +0000 (0:00:00.251) 0:00:22.749 ******** 2026-01-02 00:44:05.369231 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:05.369239 | orchestrator | 2026-01-02 00:44:05.369249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:05.369257 | orchestrator | Friday 02 January 2026 00:44:03 +0000 (0:00:00.255) 0:00:23.004 ******** 2026-01-02 00:44:05.369266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-02 00:44:05.369275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:44:05.369284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:44:05.369292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:44:05.369301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:44:05.369310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:44:05.369323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:44:05.369330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:44:05.369338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-02 00:44:05.369345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:44:05.369352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:44:05.369360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:44:05.369367 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:44:05.369374 | orchestrator | 2026-01-02 00:44:05.369382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:05.369389 | orchestrator | Friday 02 January 2026 00:44:03 +0000 (0:00:00.430) 0:00:23.434 ******** 2026-01-02 00:44:05.369396 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:05.369404 | orchestrator | 2026-01-02 00:44:05.369411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:05.369418 | orchestrator | Friday 02 January 2026 00:44:03 +0000 (0:00:00.186) 0:00:23.621 ******** 2026-01-02 00:44:05.369426 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:05.369433 | orchestrator | 2026-01-02 00:44:05.369440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:05.369448 | orchestrator | Friday 02 January 2026 00:44:04 +0000 (0:00:00.241) 0:00:23.862 ******** 2026-01-02 00:44:05.369455 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:05.369462 | orchestrator | 2026-01-02 00:44:05.369470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:05.369477 | orchestrator | Friday 02 January 2026 00:44:04 +0000 (0:00:00.584) 0:00:24.447 ******** 2026-01-02 00:44:05.369484 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:05.369491 | orchestrator | 2026-01-02 00:44:05.369499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:05.369506 | orchestrator | Friday 02 January 2026 00:44:04 +0000 (0:00:00.197) 0:00:24.644 ******** 2026-01-02 00:44:05.369513 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:05.369521 | orchestrator | 2026-01-02 00:44:05.369528 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-02 00:44:05.369540 | orchestrator | Friday 02 January 2026 00:44:05 +0000 (0:00:00.209) 0:00:24.854 ******** 2026-01-02 00:44:05.369548 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:05.369556 | orchestrator | 2026-01-02 00:44:05.369568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.844437 | orchestrator | Friday 02 January 2026 00:44:05 +0000 (0:00:00.202) 0:00:25.057 ******** 2026-01-02 00:44:16.844566 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.844591 | orchestrator | 2026-01-02 00:44:16.844610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.844639 | orchestrator | Friday 02 January 2026 00:44:05 +0000 (0:00:00.221) 0:00:25.279 ******** 2026-01-02 00:44:16.844654 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.844669 | orchestrator | 2026-01-02 00:44:16.844685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.844701 | orchestrator | Friday 02 January 2026 00:44:05 +0000 (0:00:00.213) 0:00:25.492 ******** 2026-01-02 00:44:16.844717 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2) 2026-01-02 00:44:16.844735 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2) 2026-01-02 00:44:16.844750 | orchestrator | 2026-01-02 00:44:16.844764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.844777 | orchestrator | Friday 02 January 2026 00:44:06 +0000 (0:00:00.436) 0:00:25.929 ******** 2026-01-02 00:44:16.844792 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf) 2026-01-02 00:44:16.844806 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf) 2026-01-02 00:44:16.844821 | orchestrator | 2026-01-02 00:44:16.844836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.844851 | orchestrator | Friday 02 January 2026 00:44:06 +0000 (0:00:00.438) 0:00:26.368 ******** 2026-01-02 00:44:16.844867 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005) 2026-01-02 00:44:16.844883 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005) 2026-01-02 00:44:16.844899 | orchestrator | 2026-01-02 00:44:16.844917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.844934 | orchestrator | Friday 02 January 2026 00:44:07 +0000 (0:00:00.449) 0:00:26.817 ******** 2026-01-02 00:44:16.844950 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f) 2026-01-02 00:44:16.844966 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f) 2026-01-02 00:44:16.844982 | orchestrator | 2026-01-02 00:44:16.844999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:16.845016 | orchestrator | Friday 02 January 2026 00:44:07 +0000 (0:00:00.780) 0:00:27.597 ******** 2026-01-02 00:44:16.845032 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-02 00:44:16.845050 | orchestrator | 2026-01-02 00:44:16.845112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845130 | orchestrator | Friday 02 January 2026 00:44:08 +0000 (0:00:00.558) 0:00:28.156 ******** 2026-01-02 00:44:16.845146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-02 00:44:16.845164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-02 00:44:16.845181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-02 00:44:16.845198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-02 00:44:16.845215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-02 00:44:16.845314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-02 00:44:16.845331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-02 00:44:16.845340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-02 00:44:16.845347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-02 00:44:16.845355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-02 00:44:16.845363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-02 00:44:16.845371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-02 00:44:16.845379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-02 00:44:16.845387 | orchestrator | 2026-01-02 00:44:16.845395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845403 | orchestrator | Friday 02 January 2026 00:44:09 +0000 (0:00:00.592) 0:00:28.749 ******** 2026-01-02 00:44:16.845415 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845429 | orchestrator | 2026-01-02 
00:44:16.845442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845456 | orchestrator | Friday 02 January 2026 00:44:09 +0000 (0:00:00.244) 0:00:28.993 ******** 2026-01-02 00:44:16.845468 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845479 | orchestrator | 2026-01-02 00:44:16.845492 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845505 | orchestrator | Friday 02 January 2026 00:44:09 +0000 (0:00:00.218) 0:00:29.212 ******** 2026-01-02 00:44:16.845516 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845528 | orchestrator | 2026-01-02 00:44:16.845567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845583 | orchestrator | Friday 02 January 2026 00:44:09 +0000 (0:00:00.233) 0:00:29.445 ******** 2026-01-02 00:44:16.845597 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845612 | orchestrator | 2026-01-02 00:44:16.845626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845639 | orchestrator | Friday 02 January 2026 00:44:09 +0000 (0:00:00.215) 0:00:29.661 ******** 2026-01-02 00:44:16.845653 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845665 | orchestrator | 2026-01-02 00:44:16.845679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845692 | orchestrator | Friday 02 January 2026 00:44:10 +0000 (0:00:00.242) 0:00:29.903 ******** 2026-01-02 00:44:16.845706 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845719 | orchestrator | 2026-01-02 00:44:16.845732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845744 | orchestrator | Friday 02 January 2026 00:44:10 +0000 (0:00:00.201) 
0:00:30.104 ******** 2026-01-02 00:44:16.845752 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845760 | orchestrator | 2026-01-02 00:44:16.845768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845776 | orchestrator | Friday 02 January 2026 00:44:10 +0000 (0:00:00.231) 0:00:30.335 ******** 2026-01-02 00:44:16.845784 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845792 | orchestrator | 2026-01-02 00:44:16.845800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845808 | orchestrator | Friday 02 January 2026 00:44:10 +0000 (0:00:00.229) 0:00:30.564 ******** 2026-01-02 00:44:16.845816 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-02 00:44:16.845824 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-02 00:44:16.845833 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-02 00:44:16.845841 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-02 00:44:16.845859 | orchestrator | 2026-01-02 00:44:16.845867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845875 | orchestrator | Friday 02 January 2026 00:44:11 +0000 (0:00:01.056) 0:00:31.621 ******** 2026-01-02 00:44:16.845883 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845891 | orchestrator | 2026-01-02 00:44:16.845899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845907 | orchestrator | Friday 02 January 2026 00:44:12 +0000 (0:00:00.188) 0:00:31.809 ******** 2026-01-02 00:44:16.845915 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845923 | orchestrator | 2026-01-02 00:44:16.845930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845938 | orchestrator | Friday 02 
January 2026 00:44:12 +0000 (0:00:00.674) 0:00:32.483 ******** 2026-01-02 00:44:16.845946 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845954 | orchestrator | 2026-01-02 00:44:16.845962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:16.845970 | orchestrator | Friday 02 January 2026 00:44:13 +0000 (0:00:00.232) 0:00:32.716 ******** 2026-01-02 00:44:16.845978 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.845986 | orchestrator | 2026-01-02 00:44:16.845994 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-02 00:44:16.846007 | orchestrator | Friday 02 January 2026 00:44:13 +0000 (0:00:00.210) 0:00:32.926 ******** 2026-01-02 00:44:16.846094 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.846106 | orchestrator | 2026-01-02 00:44:16.846114 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-02 00:44:16.846122 | orchestrator | Friday 02 January 2026 00:44:13 +0000 (0:00:00.145) 0:00:33.071 ******** 2026-01-02 00:44:16.846130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '319da19b-b53c-570d-92cc-c377bf830026'}}) 2026-01-02 00:44:16.846139 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}}) 2026-01-02 00:44:16.846146 | orchestrator | 2026-01-02 00:44:16.846154 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-02 00:44:16.846162 | orchestrator | Friday 02 January 2026 00:44:13 +0000 (0:00:00.181) 0:00:33.253 ******** 2026-01-02 00:44:16.846171 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'}) 2026-01-02 00:44:16.846180 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}) 2026-01-02 00:44:16.846188 | orchestrator | 2026-01-02 00:44:16.846196 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-02 00:44:16.846204 | orchestrator | Friday 02 January 2026 00:44:15 +0000 (0:00:01.808) 0:00:35.061 ******** 2026-01-02 00:44:16.846212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:16.846221 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:16.846229 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:16.846236 | orchestrator | 2026-01-02 00:44:16.846244 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-02 00:44:16.846252 | orchestrator | Friday 02 January 2026 00:44:15 +0000 (0:00:00.157) 0:00:35.219 ******** 2026-01-02 00:44:16.846260 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'}) 2026-01-02 00:44:16.846278 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}) 2026-01-02 00:44:21.954985 | orchestrator | 2026-01-02 00:44:21.955129 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-02 00:44:21.955149 | orchestrator | Friday 02 January 2026 00:44:16 +0000 (0:00:01.310) 0:00:36.530 ******** 2026-01-02 00:44:21.955161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 
'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:21.955174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955186 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955199 | orchestrator | 2026-01-02 00:44:21.955210 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-02 00:44:21.955221 | orchestrator | Friday 02 January 2026 00:44:16 +0000 (0:00:00.159) 0:00:36.689 ******** 2026-01-02 00:44:21.955232 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955244 | orchestrator | 2026-01-02 00:44:21.955255 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-02 00:44:21.955266 | orchestrator | Friday 02 January 2026 00:44:17 +0000 (0:00:00.144) 0:00:36.833 ******** 2026-01-02 00:44:21.955277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:21.955288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955299 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955310 | orchestrator | 2026-01-02 00:44:21.955321 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-02 00:44:21.955332 | orchestrator | Friday 02 January 2026 00:44:17 +0000 (0:00:00.153) 0:00:36.987 ******** 2026-01-02 00:44:21.955343 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955354 | orchestrator | 2026-01-02 00:44:21.955365 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-02 00:44:21.955376 | orchestrator | Friday 
02 January 2026 00:44:17 +0000 (0:00:00.140) 0:00:37.127 ******** 2026-01-02 00:44:21.955387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:21.955398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955409 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955420 | orchestrator | 2026-01-02 00:44:21.955430 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-02 00:44:21.955458 | orchestrator | Friday 02 January 2026 00:44:17 +0000 (0:00:00.356) 0:00:37.484 ******** 2026-01-02 00:44:21.955470 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955480 | orchestrator | 2026-01-02 00:44:21.955491 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-02 00:44:21.955502 | orchestrator | Friday 02 January 2026 00:44:17 +0000 (0:00:00.132) 0:00:37.617 ******** 2026-01-02 00:44:21.955514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:21.955527 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955541 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955553 | orchestrator | 2026-01-02 00:44:21.955566 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-02 00:44:21.955579 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.158) 0:00:37.775 ******** 2026-01-02 00:44:21.955592 | orchestrator | ok: [testbed-node-4] 
2026-01-02 00:44:21.955630 | orchestrator | 2026-01-02 00:44:21.955643 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-02 00:44:21.955654 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.136) 0:00:37.911 ******** 2026-01-02 00:44:21.955665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:21.955677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955688 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955699 | orchestrator | 2026-01-02 00:44:21.955710 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-02 00:44:21.955720 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.164) 0:00:38.076 ******** 2026-01-02 00:44:21.955731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:21.955742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955753 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955764 | orchestrator | 2026-01-02 00:44:21.955775 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-02 00:44:21.955804 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.156) 0:00:38.233 ******** 2026-01-02 00:44:21.955816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 
00:44:21.955827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:21.955838 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955849 | orchestrator | 2026-01-02 00:44:21.955860 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-02 00:44:21.955871 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.159) 0:00:38.392 ******** 2026-01-02 00:44:21.955882 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955892 | orchestrator | 2026-01-02 00:44:21.955903 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-02 00:44:21.955914 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.135) 0:00:38.527 ******** 2026-01-02 00:44:21.955925 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955936 | orchestrator | 2026-01-02 00:44:21.955947 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-02 00:44:21.955958 | orchestrator | Friday 02 January 2026 00:44:18 +0000 (0:00:00.117) 0:00:38.645 ******** 2026-01-02 00:44:21.955968 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.955979 | orchestrator | 2026-01-02 00:44:21.955990 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-02 00:44:21.956001 | orchestrator | Friday 02 January 2026 00:44:19 +0000 (0:00:00.120) 0:00:38.766 ******** 2026-01-02 00:44:21.956012 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:44:21.956023 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-02 00:44:21.956034 | orchestrator | } 2026-01-02 00:44:21.956045 | orchestrator | 2026-01-02 00:44:21.956109 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-02 
00:44:21.956122 | orchestrator | Friday 02 January 2026 00:44:19 +0000 (0:00:00.108) 0:00:38.874 ******** 2026-01-02 00:44:21.956133 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:44:21.956144 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-02 00:44:21.956155 | orchestrator | } 2026-01-02 00:44:21.956166 | orchestrator | 2026-01-02 00:44:21.956176 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-02 00:44:21.956187 | orchestrator | Friday 02 January 2026 00:44:19 +0000 (0:00:00.109) 0:00:38.983 ******** 2026-01-02 00:44:21.956207 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:44:21.956219 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-02 00:44:21.956230 | orchestrator | } 2026-01-02 00:44:21.956241 | orchestrator | 2026-01-02 00:44:21.956252 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-02 00:44:21.956262 | orchestrator | Friday 02 January 2026 00:44:19 +0000 (0:00:00.247) 0:00:39.230 ******** 2026-01-02 00:44:21.956273 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:21.956284 | orchestrator | 2026-01-02 00:44:21.956296 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-02 00:44:21.956307 | orchestrator | Friday 02 January 2026 00:44:20 +0000 (0:00:00.482) 0:00:39.713 ******** 2026-01-02 00:44:21.956317 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:21.956328 | orchestrator | 2026-01-02 00:44:21.956339 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-02 00:44:21.956350 | orchestrator | Friday 02 January 2026 00:44:20 +0000 (0:00:00.503) 0:00:40.216 ******** 2026-01-02 00:44:21.956361 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:21.956372 | orchestrator | 2026-01-02 00:44:21.956383 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-02 00:44:21.956394 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.500) 0:00:40.717 ******** 2026-01-02 00:44:21.956405 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:21.956416 | orchestrator | 2026-01-02 00:44:21.956427 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-02 00:44:21.956438 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.128) 0:00:40.845 ******** 2026-01-02 00:44:21.956448 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.956459 | orchestrator | 2026-01-02 00:44:21.956479 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-02 00:44:21.956490 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.096) 0:00:40.942 ******** 2026-01-02 00:44:21.956501 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.956512 | orchestrator | 2026-01-02 00:44:21.956523 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-02 00:44:21.956534 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.103) 0:00:41.045 ******** 2026-01-02 00:44:21.956545 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:44:21.956556 | orchestrator |  "vgs_report": { 2026-01-02 00:44:21.956568 | orchestrator |  "vg": [] 2026-01-02 00:44:21.956579 | orchestrator |  } 2026-01-02 00:44:21.956590 | orchestrator | } 2026-01-02 00:44:21.956601 | orchestrator | 2026-01-02 00:44:21.956612 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-02 00:44:21.956623 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.124) 0:00:41.170 ******** 2026-01-02 00:44:21.956634 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.956645 | orchestrator | 2026-01-02 00:44:21.956656 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-02 00:44:21.956667 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.129) 0:00:41.300 ******** 2026-01-02 00:44:21.956678 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.956689 | orchestrator | 2026-01-02 00:44:21.956700 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-02 00:44:21.956711 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.123) 0:00:41.423 ******** 2026-01-02 00:44:21.956722 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.956733 | orchestrator | 2026-01-02 00:44:21.956743 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-02 00:44:21.956755 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.109) 0:00:41.533 ******** 2026-01-02 00:44:21.956766 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:21.956777 | orchestrator | 2026-01-02 00:44:21.956795 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-02 00:44:25.937960 | orchestrator | Friday 02 January 2026 00:44:21 +0000 (0:00:00.111) 0:00:41.645 ******** 2026-01-02 00:44:25.938398 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938458 | orchestrator | 2026-01-02 00:44:25.938465 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-02 00:44:25.938485 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.229) 0:00:41.874 ******** 2026-01-02 00:44:25.938490 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938494 | orchestrator | 2026-01-02 00:44:25.938499 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-02 00:44:25.938504 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.109) 0:00:41.983 ******** 2026-01-02 00:44:25.938509 | orchestrator | skipping: [testbed-node-4] 
2026-01-02 00:44:25.938513 | orchestrator | 2026-01-02 00:44:25.938518 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-02 00:44:25.938555 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.124) 0:00:42.108 ******** 2026-01-02 00:44:25.938570 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938589 | orchestrator | 2026-01-02 00:44:25.938617 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-02 00:44:25.938622 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.123) 0:00:42.231 ******** 2026-01-02 00:44:25.938627 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938644 | orchestrator | 2026-01-02 00:44:25.938664 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-02 00:44:25.938682 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.122) 0:00:42.354 ******** 2026-01-02 00:44:25.938687 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938714 | orchestrator | 2026-01-02 00:44:25.938719 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-02 00:44:25.938723 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.125) 0:00:42.480 ******** 2026-01-02 00:44:25.938728 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938733 | orchestrator | 2026-01-02 00:44:25.938737 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-02 00:44:25.938742 | orchestrator | Friday 02 January 2026 00:44:22 +0000 (0:00:00.115) 0:00:42.595 ******** 2026-01-02 00:44:25.938746 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938750 | orchestrator | 2026-01-02 00:44:25.938754 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-02 00:44:25.938758 | orchestrator | 
Friday 02 January 2026 00:44:23 +0000 (0:00:00.129) 0:00:42.725 ******** 2026-01-02 00:44:25.938761 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938765 | orchestrator | 2026-01-02 00:44:25.938769 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-02 00:44:25.938773 | orchestrator | Friday 02 January 2026 00:44:23 +0000 (0:00:00.108) 0:00:42.833 ******** 2026-01-02 00:44:25.938777 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938780 | orchestrator | 2026-01-02 00:44:25.938785 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-02 00:44:25.938801 | orchestrator | Friday 02 January 2026 00:44:23 +0000 (0:00:00.111) 0:00:42.944 ******** 2026-01-02 00:44:25.938809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.938818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.938825 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.938832 | orchestrator | 2026-01-02 00:44:25.938838 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-02 00:44:25.938844 | orchestrator | Friday 02 January 2026 00:44:23 +0000 (0:00:00.128) 0:00:43.072 ******** 2026-01-02 00:44:25.938910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.939596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.939629 | orchestrator | skipping: 
[testbed-node-4] 2026-01-02 00:44:25.939638 | orchestrator | 2026-01-02 00:44:25.939646 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-02 00:44:25.939654 | orchestrator | Friday 02 January 2026 00:44:23 +0000 (0:00:00.124) 0:00:43.197 ******** 2026-01-02 00:44:25.939660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.939667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.939674 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.939680 | orchestrator | 2026-01-02 00:44:25.939687 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-02 00:44:25.939693 | orchestrator | Friday 02 January 2026 00:44:23 +0000 (0:00:00.124) 0:00:43.321 ******** 2026-01-02 00:44:25.939700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.939706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.939713 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.939719 | orchestrator | 2026-01-02 00:44:25.939744 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-02 00:44:25.939750 | orchestrator | Friday 02 January 2026 00:44:23 +0000 (0:00:00.244) 0:00:43.566 ******** 2026-01-02 00:44:25.939757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 
'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.939763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.939770 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.939776 | orchestrator | 2026-01-02 00:44:25.939781 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-02 00:44:25.939787 | orchestrator | Friday 02 January 2026 00:44:24 +0000 (0:00:00.140) 0:00:43.706 ******** 2026-01-02 00:44:25.939794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.939800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.939807 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.939813 | orchestrator | 2026-01-02 00:44:25.939819 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-02 00:44:25.939825 | orchestrator | Friday 02 January 2026 00:44:24 +0000 (0:00:00.141) 0:00:43.848 ******** 2026-01-02 00:44:25.939831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.939838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.939887 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.939922 | orchestrator | 2026-01-02 00:44:25.939959 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-02 
00:44:25.940025 | orchestrator | Friday 02 January 2026 00:44:24 +0000 (0:00:00.133) 0:00:43.981 ******** 2026-01-02 00:44:25.940340 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.940561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.940576 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.940652 | orchestrator | 2026-01-02 00:44:25.940691 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-02 00:44:25.940746 | orchestrator | Friday 02 January 2026 00:44:24 +0000 (0:00:00.135) 0:00:44.116 ******** 2026-01-02 00:44:25.940822 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:25.940861 | orchestrator | 2026-01-02 00:44:25.940890 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-02 00:44:25.940899 | orchestrator | Friday 02 January 2026 00:44:24 +0000 (0:00:00.494) 0:00:44.611 ******** 2026-01-02 00:44:25.940906 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:25.940913 | orchestrator | 2026-01-02 00:44:25.940920 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-02 00:44:25.940927 | orchestrator | Friday 02 January 2026 00:44:25 +0000 (0:00:00.463) 0:00:45.074 ******** 2026-01-02 00:44:25.940933 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:44:25.940940 | orchestrator | 2026-01-02 00:44:25.940947 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-02 00:44:25.940953 | orchestrator | Friday 02 January 2026 00:44:25 +0000 (0:00:00.121) 0:00:45.196 ******** 2026-01-02 00:44:25.940960 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'vg_name': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'}) 2026-01-02 00:44:25.940970 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'vg_name': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'}) 2026-01-02 00:44:25.940977 | orchestrator | 2026-01-02 00:44:25.940984 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-02 00:44:25.940991 | orchestrator | Friday 02 January 2026 00:44:25 +0000 (0:00:00.151) 0:00:45.348 ******** 2026-01-02 00:44:25.940997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.941004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:25.941010 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:25.941016 | orchestrator | 2026-01-02 00:44:25.941023 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-02 00:44:25.941030 | orchestrator | Friday 02 January 2026 00:44:25 +0000 (0:00:00.138) 0:00:45.486 ******** 2026-01-02 00:44:25.941037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:25.941806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:31.191573 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:31.191677 | orchestrator | 2026-01-02 00:44:31.191692 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-02 00:44:31.191705 | 
orchestrator | Friday 02 January 2026 00:44:25 +0000 (0:00:00.140) 0:00:45.627 ******** 2026-01-02 00:44:31.191716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})  2026-01-02 00:44:31.191727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})  2026-01-02 00:44:31.191738 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:44:31.191773 | orchestrator | 2026-01-02 00:44:31.191784 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-02 00:44:31.191794 | orchestrator | Friday 02 January 2026 00:44:26 +0000 (0:00:00.141) 0:00:45.768 ******** 2026-01-02 00:44:31.191805 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 00:44:31.191815 | orchestrator |  "lvm_report": { 2026-01-02 00:44:31.191826 | orchestrator |  "lv": [ 2026-01-02 00:44:31.191835 | orchestrator |  { 2026-01-02 00:44:31.191845 | orchestrator |  "lv_name": "osd-block-319da19b-b53c-570d-92cc-c377bf830026", 2026-01-02 00:44:31.191856 | orchestrator |  "vg_name": "ceph-319da19b-b53c-570d-92cc-c377bf830026" 2026-01-02 00:44:31.191866 | orchestrator |  }, 2026-01-02 00:44:31.191876 | orchestrator |  { 2026-01-02 00:44:31.191886 | orchestrator |  "lv_name": "osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0", 2026-01-02 00:44:31.191895 | orchestrator |  "vg_name": "ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0" 2026-01-02 00:44:31.191905 | orchestrator |  } 2026-01-02 00:44:31.191915 | orchestrator |  ], 2026-01-02 00:44:31.191925 | orchestrator |  "pv": [ 2026-01-02 00:44:31.191934 | orchestrator |  { 2026-01-02 00:44:31.191944 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-02 00:44:31.191954 | orchestrator |  "vg_name": "ceph-319da19b-b53c-570d-92cc-c377bf830026" 2026-01-02 00:44:31.191964 | orchestrator |  }, 2026-01-02 
00:44:31.191973 | orchestrator |  { 2026-01-02 00:44:31.191983 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-02 00:44:31.191992 | orchestrator |  "vg_name": "ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0" 2026-01-02 00:44:31.192002 | orchestrator |  } 2026-01-02 00:44:31.192012 | orchestrator |  ] 2026-01-02 00:44:31.192021 | orchestrator |  } 2026-01-02 00:44:31.192031 | orchestrator | } 2026-01-02 00:44:31.192042 | orchestrator | 2026-01-02 00:44:31.192109 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-02 00:44:31.192123 | orchestrator | 2026-01-02 00:44:31.192135 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-02 00:44:31.192147 | orchestrator | Friday 02 January 2026 00:44:26 +0000 (0:00:00.374) 0:00:46.142 ******** 2026-01-02 00:44:31.192158 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-02 00:44:31.192169 | orchestrator | 2026-01-02 00:44:31.192181 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-02 00:44:31.192193 | orchestrator | Friday 02 January 2026 00:44:26 +0000 (0:00:00.238) 0:00:46.381 ******** 2026-01-02 00:44:31.192204 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:31.192215 | orchestrator | 2026-01-02 00:44:31.192227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192238 | orchestrator | Friday 02 January 2026 00:44:26 +0000 (0:00:00.215) 0:00:46.596 ******** 2026-01-02 00:44:31.192248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-02 00:44:31.192259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-02 00:44:31.192270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-02 00:44:31.192281 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-02 00:44:31.192292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-02 00:44:31.192302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-02 00:44:31.192313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-02 00:44:31.192323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-02 00:44:31.192334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-02 00:44:31.192353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-02 00:44:31.192364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-02 00:44:31.192376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-02 00:44:31.192387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-02 00:44:31.192398 | orchestrator | 2026-01-02 00:44:31.192413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192426 | orchestrator | Friday 02 January 2026 00:44:27 +0000 (0:00:00.357) 0:00:46.954 ******** 2026-01-02 00:44:31.192437 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192447 | orchestrator | 2026-01-02 00:44:31.192459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192471 | orchestrator | Friday 02 January 2026 00:44:27 +0000 (0:00:00.176) 0:00:47.130 ******** 2026-01-02 00:44:31.192483 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192493 | orchestrator | 2026-01-02 
00:44:31.192503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192530 | orchestrator | Friday 02 January 2026 00:44:27 +0000 (0:00:00.160) 0:00:47.291 ******** 2026-01-02 00:44:31.192541 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192551 | orchestrator | 2026-01-02 00:44:31.192560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192570 | orchestrator | Friday 02 January 2026 00:44:27 +0000 (0:00:00.195) 0:00:47.487 ******** 2026-01-02 00:44:31.192580 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192589 | orchestrator | 2026-01-02 00:44:31.192599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192653 | orchestrator | Friday 02 January 2026 00:44:27 +0000 (0:00:00.162) 0:00:47.649 ******** 2026-01-02 00:44:31.192664 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192674 | orchestrator | 2026-01-02 00:44:31.192684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192693 | orchestrator | Friday 02 January 2026 00:44:28 +0000 (0:00:00.180) 0:00:47.830 ******** 2026-01-02 00:44:31.192703 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192713 | orchestrator | 2026-01-02 00:44:31.192722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192732 | orchestrator | Friday 02 January 2026 00:44:28 +0000 (0:00:00.414) 0:00:48.245 ******** 2026-01-02 00:44:31.192742 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192751 | orchestrator | 2026-01-02 00:44:31.192761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192770 | orchestrator | Friday 02 January 2026 00:44:28 +0000 (0:00:00.176) 
0:00:48.421 ******** 2026-01-02 00:44:31.192780 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:31.192790 | orchestrator | 2026-01-02 00:44:31.192799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192809 | orchestrator | Friday 02 January 2026 00:44:28 +0000 (0:00:00.176) 0:00:48.598 ******** 2026-01-02 00:44:31.192819 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b) 2026-01-02 00:44:31.192831 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b) 2026-01-02 00:44:31.192841 | orchestrator | 2026-01-02 00:44:31.192850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192860 | orchestrator | Friday 02 January 2026 00:44:29 +0000 (0:00:00.380) 0:00:48.978 ******** 2026-01-02 00:44:31.192870 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee) 2026-01-02 00:44:31.192879 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee) 2026-01-02 00:44:31.192889 | orchestrator | 2026-01-02 00:44:31.192919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192934 | orchestrator | Friday 02 January 2026 00:44:29 +0000 (0:00:00.446) 0:00:49.425 ******** 2026-01-02 00:44:31.192944 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8) 2026-01-02 00:44:31.192953 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8) 2026-01-02 00:44:31.192963 | orchestrator | 2026-01-02 00:44:31.192973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.192983 | orchestrator | Friday 02 
January 2026 00:44:30 +0000 (0:00:00.370) 0:00:49.795 ******** 2026-01-02 00:44:31.192992 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e) 2026-01-02 00:44:31.193002 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e) 2026-01-02 00:44:31.193012 | orchestrator | 2026-01-02 00:44:31.193022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-02 00:44:31.193031 | orchestrator | Friday 02 January 2026 00:44:30 +0000 (0:00:00.378) 0:00:50.173 ******** 2026-01-02 00:44:31.193041 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-02 00:44:31.193112 | orchestrator | 2026-01-02 00:44:31.193125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:31.193134 | orchestrator | Friday 02 January 2026 00:44:30 +0000 (0:00:00.316) 0:00:50.490 ******** 2026-01-02 00:44:31.193144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-02 00:44:31.193154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-02 00:44:31.193163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-02 00:44:31.193173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-02 00:44:31.193182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-02 00:44:31.193192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-02 00:44:31.193202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-02 00:44:31.193211 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-02 00:44:31.193221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-02 00:44:31.193230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-02 00:44:31.193240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-02 00:44:31.193257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-02 00:44:39.395994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-02 00:44:39.396134 | orchestrator | 2026-01-02 00:44:39.396147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396154 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.377) 0:00:50.867 ******** 2026-01-02 00:44:39.396161 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396169 | orchestrator | 2026-01-02 00:44:39.396176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396184 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.186) 0:00:51.054 ******** 2026-01-02 00:44:39.396191 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396198 | orchestrator | 2026-01-02 00:44:39.396205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396212 | orchestrator | Friday 02 January 2026 00:44:31 +0000 (0:00:00.514) 0:00:51.569 ******** 2026-01-02 00:44:39.396243 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396254 | orchestrator | 2026-01-02 00:44:39.396265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396276 | 
orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.181) 0:00:51.750 ******** 2026-01-02 00:44:39.396286 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396292 | orchestrator | 2026-01-02 00:44:39.396299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396309 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.206) 0:00:51.957 ******** 2026-01-02 00:44:39.396320 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396331 | orchestrator | 2026-01-02 00:44:39.396339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396346 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.193) 0:00:52.151 ******** 2026-01-02 00:44:39.396353 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396360 | orchestrator | 2026-01-02 00:44:39.396367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396374 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.188) 0:00:52.340 ******** 2026-01-02 00:44:39.396381 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396388 | orchestrator | 2026-01-02 00:44:39.396395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396402 | orchestrator | Friday 02 January 2026 00:44:32 +0000 (0:00:00.185) 0:00:52.526 ******** 2026-01-02 00:44:39.396409 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396416 | orchestrator | 2026-01-02 00:44:39.396423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396430 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.242) 0:00:52.769 ******** 2026-01-02 00:44:39.396450 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-02 00:44:39.396460 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-02 00:44:39.396467 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-02 00:44:39.396475 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-02 00:44:39.396483 | orchestrator | 2026-01-02 00:44:39.396490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396498 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.548) 0:00:53.318 ******** 2026-01-02 00:44:39.396506 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396513 | orchestrator | 2026-01-02 00:44:39.396521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396529 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.168) 0:00:53.487 ******** 2026-01-02 00:44:39.396536 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396544 | orchestrator | 2026-01-02 00:44:39.396552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396559 | orchestrator | Friday 02 January 2026 00:44:33 +0000 (0:00:00.155) 0:00:53.642 ******** 2026-01-02 00:44:39.396567 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396575 | orchestrator | 2026-01-02 00:44:39.396583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-02 00:44:39.396591 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.164) 0:00:53.807 ******** 2026-01-02 00:44:39.396599 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396607 | orchestrator | 2026-01-02 00:44:39.396614 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-02 00:44:39.396622 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.215) 0:00:54.022 ******** 2026-01-02 00:44:39.396629 | orchestrator | skipping: [testbed-node-5] 2026-01-02 
00:44:39.396637 | orchestrator | 2026-01-02 00:44:39.396645 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-02 00:44:39.396653 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.237) 0:00:54.260 ******** 2026-01-02 00:44:39.396661 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}}) 2026-01-02 00:44:39.396676 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8699efe3-2ea7-5359-bcef-4eac218b02a9'}}) 2026-01-02 00:44:39.396683 | orchestrator | 2026-01-02 00:44:39.396690 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-02 00:44:39.396696 | orchestrator | Friday 02 January 2026 00:44:34 +0000 (0:00:00.179) 0:00:54.440 ******** 2026-01-02 00:44:39.396704 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}) 2026-01-02 00:44:39.396712 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'}) 2026-01-02 00:44:39.396719 | orchestrator | 2026-01-02 00:44:39.396726 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-02 00:44:39.396748 | orchestrator | Friday 02 January 2026 00:44:36 +0000 (0:00:01.797) 0:00:56.237 ******** 2026-01-02 00:44:39.396756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:39.396765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:39.396772 | orchestrator | skipping: 
[testbed-node-5] 2026-01-02 00:44:39.396779 | orchestrator | 2026-01-02 00:44:39.396787 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-02 00:44:39.396795 | orchestrator | Friday 02 January 2026 00:44:36 +0000 (0:00:00.125) 0:00:56.363 ******** 2026-01-02 00:44:39.396802 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}) 2026-01-02 00:44:39.396810 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'}) 2026-01-02 00:44:39.396817 | orchestrator | 2026-01-02 00:44:39.396823 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-02 00:44:39.396830 | orchestrator | Friday 02 January 2026 00:44:37 +0000 (0:00:01.284) 0:00:57.647 ******** 2026-01-02 00:44:39.396837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:39.396844 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:39.396851 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396857 | orchestrator | 2026-01-02 00:44:39.396864 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-02 00:44:39.396871 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.137) 0:00:57.785 ******** 2026-01-02 00:44:39.396877 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396884 | orchestrator | 2026-01-02 00:44:39.396891 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-02 00:44:39.396899 | 
orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.121) 0:00:57.906 ******** 2026-01-02 00:44:39.396911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:39.396918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:39.396925 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396932 | orchestrator | 2026-01-02 00:44:39.396939 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-02 00:44:39.396952 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.143) 0:00:58.050 ******** 2026-01-02 00:44:39.396961 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.396972 | orchestrator | 2026-01-02 00:44:39.396982 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-02 00:44:39.396990 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.127) 0:00:58.177 ******** 2026-01-02 00:44:39.396997 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:39.397005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:39.397014 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.397023 | orchestrator | 2026-01-02 00:44:39.397032 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-02 00:44:39.397040 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.156) 0:00:58.333 ******** 2026-01-02 00:44:39.397064 | orchestrator | 
skipping: [testbed-node-5] 2026-01-02 00:44:39.397071 | orchestrator | 2026-01-02 00:44:39.397079 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-02 00:44:39.397086 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.139) 0:00:58.473 ******** 2026-01-02 00:44:39.397093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:39.397100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:39.397107 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:39.397114 | orchestrator | 2026-01-02 00:44:39.397120 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-02 00:44:39.397127 | orchestrator | Friday 02 January 2026 00:44:38 +0000 (0:00:00.155) 0:00:58.628 ******** 2026-01-02 00:44:39.397135 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:39.397143 | orchestrator | 2026-01-02 00:44:39.397150 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-02 00:44:39.397157 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.302) 0:00:58.931 ******** 2026-01-02 00:44:39.397173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:45.464658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:45.464738 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.464746 | orchestrator | 2026-01-02 00:44:45.464752 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-02 00:44:45.464759 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.155) 0:00:59.086 ******** 2026-01-02 00:44:45.464764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:45.464770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:45.464775 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.464779 | orchestrator | 2026-01-02 00:44:45.464784 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-02 00:44:45.464789 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.180) 0:00:59.267 ******** 2026-01-02 00:44:45.464793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:45.464798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:45.464819 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.464824 | orchestrator | 2026-01-02 00:44:45.464829 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-02 00:44:45.464833 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.139) 0:00:59.407 ******** 2026-01-02 00:44:45.464838 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.464843 | orchestrator | 2026-01-02 00:44:45.464847 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-02 00:44:45.464852 | orchestrator | Friday 02 January 2026 00:44:39 +0000 
(0:00:00.143) 0:00:59.551 ******** 2026-01-02 00:44:45.464856 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.464861 | orchestrator | 2026-01-02 00:44:45.464865 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-02 00:44:45.464870 | orchestrator | Friday 02 January 2026 00:44:39 +0000 (0:00:00.139) 0:00:59.690 ******** 2026-01-02 00:44:45.464874 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.464879 | orchestrator | 2026-01-02 00:44:45.464884 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-02 00:44:45.464888 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.146) 0:00:59.836 ******** 2026-01-02 00:44:45.464893 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:44:45.464898 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-02 00:44:45.464903 | orchestrator | } 2026-01-02 00:44:45.464908 | orchestrator | 2026-01-02 00:44:45.464913 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-02 00:44:45.464918 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.152) 0:00:59.989 ******** 2026-01-02 00:44:45.464922 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:44:45.464927 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-02 00:44:45.464932 | orchestrator | } 2026-01-02 00:44:45.464936 | orchestrator | 2026-01-02 00:44:45.464941 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-02 00:44:45.464945 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.154) 0:01:00.143 ******** 2026-01-02 00:44:45.464950 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:44:45.464955 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-02 00:44:45.464959 | orchestrator | } 2026-01-02 00:44:45.464964 | orchestrator | 2026-01-02 00:44:45.464968 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-02 00:44:45.464973 | orchestrator | Friday 02 January 2026 00:44:40 +0000 (0:00:00.161) 0:01:00.305 ******** 2026-01-02 00:44:45.464978 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:45.464982 | orchestrator | 2026-01-02 00:44:45.464987 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-02 00:44:45.464991 | orchestrator | Friday 02 January 2026 00:44:41 +0000 (0:00:00.515) 0:01:00.821 ******** 2026-01-02 00:44:45.464996 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:45.465001 | orchestrator | 2026-01-02 00:44:45.465005 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-02 00:44:45.465010 | orchestrator | Friday 02 January 2026 00:44:41 +0000 (0:00:00.538) 0:01:01.359 ******** 2026-01-02 00:44:45.465014 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:45.465019 | orchestrator | 2026-01-02 00:44:45.465023 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-02 00:44:45.465028 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.709) 0:01:02.069 ******** 2026-01-02 00:44:45.465032 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:45.465037 | orchestrator | 2026-01-02 00:44:45.465041 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-02 00:44:45.465073 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.145) 0:01:02.214 ******** 2026-01-02 00:44:45.465081 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465089 | orchestrator | 2026-01-02 00:44:45.465097 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-02 00:44:45.465107 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.107) 0:01:02.322 ******** 2026-01-02 00:44:45.465112 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465117 | orchestrator | 2026-01-02 00:44:45.465121 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-02 00:44:45.465139 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.096) 0:01:02.419 ******** 2026-01-02 00:44:45.465144 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:44:45.465149 | orchestrator |  "vgs_report": { 2026-01-02 00:44:45.465153 | orchestrator |  "vg": [] 2026-01-02 00:44:45.465168 | orchestrator |  } 2026-01-02 00:44:45.465173 | orchestrator | } 2026-01-02 00:44:45.465178 | orchestrator | 2026-01-02 00:44:45.465182 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-02 00:44:45.465187 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.136) 0:01:02.555 ******** 2026-01-02 00:44:45.465192 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465196 | orchestrator | 2026-01-02 00:44:45.465201 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-02 00:44:45.465205 | orchestrator | Friday 02 January 2026 00:44:42 +0000 (0:00:00.123) 0:01:02.679 ******** 2026-01-02 00:44:45.465210 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465215 | orchestrator | 2026-01-02 00:44:45.465219 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-02 00:44:45.465224 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.123) 0:01:02.803 ******** 2026-01-02 00:44:45.465229 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465235 | orchestrator | 2026-01-02 00:44:45.465240 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-02 00:44:45.465245 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.133) 0:01:02.936 ******** 2026-01-02 00:44:45.465251 | 
orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465256 | orchestrator | 2026-01-02 00:44:45.465262 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-02 00:44:45.465266 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.129) 0:01:03.066 ******** 2026-01-02 00:44:45.465272 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465277 | orchestrator | 2026-01-02 00:44:45.465282 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-02 00:44:45.465287 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.136) 0:01:03.202 ******** 2026-01-02 00:44:45.465293 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465298 | orchestrator | 2026-01-02 00:44:45.465303 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-02 00:44:45.465308 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.124) 0:01:03.326 ******** 2026-01-02 00:44:45.465313 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465318 | orchestrator | 2026-01-02 00:44:45.465324 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-02 00:44:45.465329 | orchestrator | Friday 02 January 2026 00:44:43 +0000 (0:00:00.129) 0:01:03.456 ******** 2026-01-02 00:44:45.465334 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465339 | orchestrator | 2026-01-02 00:44:45.465344 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-02 00:44:45.465349 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.390) 0:01:03.846 ******** 2026-01-02 00:44:45.465354 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465360 | orchestrator | 2026-01-02 00:44:45.465368 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-02 00:44:45.465373 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.137) 0:01:03.984 ******** 2026-01-02 00:44:45.465378 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465384 | orchestrator | 2026-01-02 00:44:45.465389 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-02 00:44:45.465398 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.134) 0:01:04.119 ******** 2026-01-02 00:44:45.465403 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465408 | orchestrator | 2026-01-02 00:44:45.465414 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-02 00:44:45.465419 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.132) 0:01:04.252 ******** 2026-01-02 00:44:45.465424 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465430 | orchestrator | 2026-01-02 00:44:45.465435 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-02 00:44:45.465440 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.134) 0:01:04.386 ******** 2026-01-02 00:44:45.465445 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465450 | orchestrator | 2026-01-02 00:44:45.465455 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-02 00:44:45.465461 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.138) 0:01:04.525 ******** 2026-01-02 00:44:45.465465 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465470 | orchestrator | 2026-01-02 00:44:45.465476 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-02 00:44:45.465481 | orchestrator | Friday 02 January 2026 00:44:44 +0000 (0:00:00.146) 0:01:04.672 ******** 2026-01-02 00:44:45.465486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:45.465491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:45.465497 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465502 | orchestrator | 2026-01-02 00:44:45.465507 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-02 00:44:45.465512 | orchestrator | Friday 02 January 2026 00:44:45 +0000 (0:00:00.158) 0:01:04.831 ******** 2026-01-02 00:44:45.465517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:45.465523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:45.465528 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:45.465533 | orchestrator | 2026-01-02 00:44:45.465538 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-02 00:44:45.465543 | orchestrator | Friday 02 January 2026 00:44:45 +0000 (0:00:00.155) 0:01:04.986 ******** 2026-01-02 00:44:45.465553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.460292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.460425 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.460444 | orchestrator | 2026-01-02 00:44:48.460457 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-02 00:44:48.460471 | orchestrator | Friday 02 January 2026 00:44:45 +0000 (0:00:00.168) 0:01:05.155 ******** 2026-01-02 00:44:48.460482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.460493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.460504 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.460515 | orchestrator | 2026-01-02 00:44:48.460526 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-02 00:44:48.460566 | orchestrator | Friday 02 January 2026 00:44:45 +0000 (0:00:00.156) 0:01:05.311 ******** 2026-01-02 00:44:48.460578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.460589 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.460600 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.460610 | orchestrator | 2026-01-02 00:44:48.460621 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-02 00:44:48.460632 | orchestrator | Friday 02 January 2026 00:44:45 +0000 (0:00:00.165) 0:01:05.476 ******** 2026-01-02 00:44:48.460643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.460668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.460679 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.460689 | orchestrator | 2026-01-02 00:44:48.460700 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-02 00:44:48.460711 | orchestrator | Friday 02 January 2026 00:44:46 +0000 (0:00:00.337) 0:01:05.814 ******** 2026-01-02 00:44:48.460722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.460732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.460743 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.460755 | orchestrator | 2026-01-02 00:44:48.460765 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-02 00:44:48.460776 | orchestrator | Friday 02 January 2026 00:44:46 +0000 (0:00:00.153) 0:01:05.967 ******** 2026-01-02 00:44:48.460786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.460799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.460812 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.460824 | orchestrator | 2026-01-02 00:44:48.460836 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-02 00:44:48.460848 | orchestrator | Friday 02 January 2026 00:44:46 +0000 (0:00:00.143) 0:01:06.111 ******** 2026-01-02 00:44:48.460861 | 
orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:48.460874 | orchestrator | 2026-01-02 00:44:48.460887 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-02 00:44:48.460900 | orchestrator | Friday 02 January 2026 00:44:46 +0000 (0:00:00.528) 0:01:06.640 ******** 2026-01-02 00:44:48.460912 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:48.460925 | orchestrator | 2026-01-02 00:44:48.460937 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-02 00:44:48.460949 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.533) 0:01:07.173 ******** 2026-01-02 00:44:48.460963 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:44:48.460975 | orchestrator | 2026-01-02 00:44:48.460987 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-02 00:44:48.460999 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.147) 0:01:07.321 ******** 2026-01-02 00:44:48.461011 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'vg_name': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'}) 2026-01-02 00:44:48.461026 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'vg_name': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'}) 2026-01-02 00:44:48.461080 | orchestrator | 2026-01-02 00:44:48.461095 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-02 00:44:48.461107 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.178) 0:01:07.500 ******** 2026-01-02 00:44:48.461137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.461151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.461163 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.461174 | orchestrator | 2026-01-02 00:44:48.461185 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-02 00:44:48.461197 | orchestrator | Friday 02 January 2026 00:44:47 +0000 (0:00:00.156) 0:01:07.656 ******** 2026-01-02 00:44:48.461207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.461218 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.461229 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.461240 | orchestrator | 2026-01-02 00:44:48.461250 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-02 00:44:48.461261 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.149) 0:01:07.806 ******** 2026-01-02 00:44:48.461272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})  2026-01-02 00:44:48.461282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})  2026-01-02 00:44:48.461293 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:44:48.461304 | orchestrator | 2026-01-02 00:44:48.461314 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-02 00:44:48.461325 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.168) 0:01:07.974 ******** 2026-01-02 00:44:48.461336 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-02 00:44:48.461346 | orchestrator |  "lvm_report": { 2026-01-02 00:44:48.461357 | orchestrator |  "lv": [ 2026-01-02 00:44:48.461368 | orchestrator |  { 2026-01-02 00:44:48.461385 | orchestrator |  "lv_name": "osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7", 2026-01-02 00:44:48.461396 | orchestrator |  "vg_name": "ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7" 2026-01-02 00:44:48.461407 | orchestrator |  }, 2026-01-02 00:44:48.461418 | orchestrator |  { 2026-01-02 00:44:48.461429 | orchestrator |  "lv_name": "osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9", 2026-01-02 00:44:48.461440 | orchestrator |  "vg_name": "ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9" 2026-01-02 00:44:48.461450 | orchestrator |  } 2026-01-02 00:44:48.461461 | orchestrator |  ], 2026-01-02 00:44:48.461472 | orchestrator |  "pv": [ 2026-01-02 00:44:48.461482 | orchestrator |  { 2026-01-02 00:44:48.461493 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-02 00:44:48.461504 | orchestrator |  "vg_name": "ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7" 2026-01-02 00:44:48.461514 | orchestrator |  }, 2026-01-02 00:44:48.461525 | orchestrator |  { 2026-01-02 00:44:48.461535 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-02 00:44:48.461546 | orchestrator |  "vg_name": "ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9" 2026-01-02 00:44:48.461557 | orchestrator |  } 2026-01-02 00:44:48.461568 | orchestrator |  ] 2026-01-02 00:44:48.461600 | orchestrator |  } 2026-01-02 00:44:48.461611 | orchestrator | } 2026-01-02 00:44:48.461623 | orchestrator | 2026-01-02 00:44:48.461634 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:44:48.461645 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-02 00:44:48.461656 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-02 00:44:48.461667 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-02 00:44:48.461678 | orchestrator | 2026-01-02 00:44:48.461689 | orchestrator | 2026-01-02 00:44:48.461699 | orchestrator | 2026-01-02 00:44:48.461710 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:44:48.461721 | orchestrator | Friday 02 January 2026 00:44:48 +0000 (0:00:00.147) 0:01:08.122 ******** 2026-01-02 00:44:48.461732 | orchestrator | =============================================================================== 2026-01-02 00:44:48.461742 | orchestrator | Create block VGs -------------------------------------------------------- 5.65s 2026-01-02 00:44:48.461753 | orchestrator | Create block LVs -------------------------------------------------------- 4.01s 2026-01-02 00:44:48.461763 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.71s 2026-01-02 00:44:48.461774 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.62s 2026-01-02 00:44:48.461785 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-01-02 00:44:48.461795 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2026-01-02 00:44:48.461806 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s 2026-01-02 00:44:48.461817 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s 2026-01-02 00:44:48.461834 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2026-01-02 00:44:48.784491 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2026-01-02 00:44:48.784589 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s 2026-01-02 00:44:48.784604 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2026-01-02 00:44:48.784616 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-01-02 00:44:48.784627 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-01-02 00:44:48.784638 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2026-01-02 00:44:48.784650 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-01-02 00:44:48.784671 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-01-02 00:44:48.784700 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.65s 2026-01-02 00:44:48.784721 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.64s 2026-01-02 00:44:48.784741 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.63s 2026-01-02 00:45:01.835594 | orchestrator | 2026-01-02 00:45:01 | INFO  | Task 6db65dd9-8d17-4554-baec-1ccd6d30be4d (facts) was prepared for execution. 2026-01-02 00:45:01.835713 | orchestrator | 2026-01-02 00:45:01 | INFO  | It takes a moment until task 6db65dd9-8d17-4554-baec-1ccd6d30be4d (facts) has been started and output is visible here. 
2026-01-02 00:45:13.678477 | orchestrator | 2026-01-02 00:45:13.678591 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-02 00:45:13.678609 | orchestrator | 2026-01-02 00:45:13.678622 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-02 00:45:13.678634 | orchestrator | Friday 02 January 2026 00:45:05 +0000 (0:00:00.245) 0:00:00.245 ******** 2026-01-02 00:45:13.678674 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:13.678687 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:45:13.678698 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:45:13.678709 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:45:13.678719 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:45:13.678730 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:45:13.678741 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:13.678751 | orchestrator | 2026-01-02 00:45:13.678762 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-02 00:45:13.678774 | orchestrator | Friday 02 January 2026 00:45:06 +0000 (0:00:00.983) 0:00:01.229 ******** 2026-01-02 00:45:13.678785 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:45:13.678797 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:45:13.678807 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:45:13.678818 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:45:13.678829 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:45:13.678840 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:45:13.678850 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:13.678861 | orchestrator | 2026-01-02 00:45:13.678872 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-02 00:45:13.678883 | orchestrator | 2026-01-02 00:45:13.678894 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-02 00:45:13.678905 | orchestrator | Friday 02 January 2026 00:45:08 +0000 (0:00:01.044) 0:00:02.274 ******** 2026-01-02 00:45:13.678916 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:45:13.678926 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:45:13.678937 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:45:13.678948 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:45:13.678959 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:13.678969 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:45:13.678980 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:45:13.678991 | orchestrator | 2026-01-02 00:45:13.679002 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-02 00:45:13.679015 | orchestrator | 2026-01-02 00:45:13.679028 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-02 00:45:13.679089 | orchestrator | Friday 02 January 2026 00:45:12 +0000 (0:00:04.845) 0:00:07.119 ******** 2026-01-02 00:45:13.679102 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:45:13.679115 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:45:13.679128 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:45:13.679140 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:45:13.679152 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:45:13.679165 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:45:13.679177 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:45:13.679189 | orchestrator | 2026-01-02 00:45:13.679202 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:45:13.679214 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:13.679228 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-02 00:45:13.679242 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:13.679255 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:13.679267 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:13.679279 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:13.679299 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:13.679312 | orchestrator | 2026-01-02 00:45:13.679325 | orchestrator | 2026-01-02 00:45:13.679337 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:45:13.679350 | orchestrator | Friday 02 January 2026 00:45:13 +0000 (0:00:00.508) 0:00:07.628 ******** 2026-01-02 00:45:13.679362 | orchestrator | =============================================================================== 2026-01-02 00:45:13.679373 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s 2026-01-02 00:45:13.679384 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2026-01-02 00:45:13.679395 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2026-01-02 00:45:13.679405 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-01-02 00:45:25.981873 | orchestrator | 2026-01-02 00:45:25 | INFO  | Task c2b75835-adb0-4504-8fe4-f41eec436cfd (frr) was prepared for execution. 2026-01-02 00:45:25.982005 | orchestrator | 2026-01-02 00:45:25 | INFO  | It takes a moment until task c2b75835-adb0-4504-8fe4-f41eec436cfd (frr) has been started and output is visible here. 
2026-01-02 00:45:48.758393 | orchestrator | 2026-01-02 00:45:48.758521 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-02 00:45:48.758543 | orchestrator | 2026-01-02 00:45:48.758560 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-02 00:45:48.758597 | orchestrator | Friday 02 January 2026 00:45:29 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-01-02 00:45:48.758613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:45:48.758630 | orchestrator | 2026-01-02 00:45:48.758646 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-02 00:45:48.758661 | orchestrator | Friday 02 January 2026 00:45:29 +0000 (0:00:00.165) 0:00:00.333 ******** 2026-01-02 00:45:48.758676 | orchestrator | changed: [testbed-manager] 2026-01-02 00:45:48.758693 | orchestrator | 2026-01-02 00:45:48.758708 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-02 00:45:48.758733 | orchestrator | Friday 02 January 2026 00:45:30 +0000 (0:00:00.887) 0:00:01.220 ******** 2026-01-02 00:45:48.758750 | orchestrator | changed: [testbed-manager] 2026-01-02 00:45:48.758765 | orchestrator | 2026-01-02 00:45:48.758781 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-02 00:45:48.758796 | orchestrator | Friday 02 January 2026 00:45:39 +0000 (0:00:08.478) 0:00:09.699 ******** 2026-01-02 00:45:48.758811 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:48.758827 | orchestrator | 2026-01-02 00:45:48.758842 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-02 00:45:48.758857 | orchestrator | Friday 02 January 2026 00:45:40 +0000 (0:00:00.964) 0:00:10.664 ******** 2026-01-02 
00:45:48.758872 | orchestrator | changed: [testbed-manager] 2026-01-02 00:45:48.758888 | orchestrator | 2026-01-02 00:45:48.758902 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-02 00:45:48.758918 | orchestrator | Friday 02 January 2026 00:45:41 +0000 (0:00:00.916) 0:00:11.581 ******** 2026-01-02 00:45:48.758936 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:48.758952 | orchestrator | 2026-01-02 00:45:48.758968 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-02 00:45:48.758987 | orchestrator | Friday 02 January 2026 00:45:42 +0000 (0:00:01.123) 0:00:12.705 ******** 2026-01-02 00:45:48.759005 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:45:48.759047 | orchestrator | 2026-01-02 00:45:48.759062 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-02 00:45:48.759075 | orchestrator | Friday 02 January 2026 00:45:42 +0000 (0:00:00.129) 0:00:12.834 ******** 2026-01-02 00:45:48.759109 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:45:48.759121 | orchestrator | 2026-01-02 00:45:48.759133 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-02 00:45:48.759145 | orchestrator | Friday 02 January 2026 00:45:42 +0000 (0:00:00.320) 0:00:13.155 ******** 2026-01-02 00:45:48.759156 | orchestrator | changed: [testbed-manager] 2026-01-02 00:45:48.759167 | orchestrator | 2026-01-02 00:45:48.759178 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-02 00:45:48.759190 | orchestrator | Friday 02 January 2026 00:45:43 +0000 (0:00:00.965) 0:00:14.120 ******** 2026-01-02 00:45:48.759201 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-02 00:45:48.759212 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-02 00:45:48.759225 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-02 00:45:48.759237 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-02 00:45:48.759249 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-02 00:45:48.759260 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-02 00:45:48.759271 | orchestrator | 2026-01-02 00:45:48.759281 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-02 00:45:48.759291 | orchestrator | Friday 02 January 2026 00:45:45 +0000 (0:00:02.125) 0:00:16.246 ******** 2026-01-02 00:45:48.759300 | orchestrator | ok: [testbed-manager] 2026-01-02 00:45:48.759310 | orchestrator | 2026-01-02 00:45:48.759320 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-02 00:45:48.759329 | orchestrator | Friday 02 January 2026 00:45:47 +0000 (0:00:01.490) 0:00:17.736 ******** 2026-01-02 00:45:48.759339 | orchestrator | changed: [testbed-manager] 2026-01-02 00:45:48.759348 | orchestrator | 2026-01-02 00:45:48.759358 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:45:48.759369 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-02 00:45:48.759379 | orchestrator | 2026-01-02 00:45:48.759388 | orchestrator | 2026-01-02 00:45:48.759398 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:45:48.759408 | orchestrator | Friday 02 January 2026 00:45:48 +0000 (0:00:01.343) 0:00:19.079 ******** 2026-01-02 00:45:48.759418 | 
orchestrator | =============================================================================== 2026-01-02 00:45:48.759428 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.48s 2026-01-02 00:45:48.759437 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.13s 2026-01-02 00:45:48.759447 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.49s 2026-01-02 00:45:48.759456 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.34s 2026-01-02 00:45:48.759466 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.12s 2026-01-02 00:45:48.759496 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2026-01-02 00:45:48.759506 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.96s 2026-01-02 00:45:48.759516 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s 2026-01-02 00:45:48.759526 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 0.89s 2026-01-02 00:45:48.759536 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.32s 2026-01-02 00:45:48.759545 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s 2026-01-02 00:45:48.759555 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-01-02 00:45:49.018426 | orchestrator | 2026-01-02 00:45:49.021604 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Jan 2 00:45:49 UTC 2026 2026-01-02 00:45:49.021635 | orchestrator | 2026-01-02 00:45:50.919542 | orchestrator | 2026-01-02 00:45:50 | INFO  | Collection nutshell is prepared for execution 2026-01-02 00:45:50.919670 | orchestrator | 2026-01-02 00:45:50 | INFO  | A [0] - 
dotfiles 2026-01-02 00:46:00.934835 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - homer 2026-01-02 00:46:00.934938 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - netdata 2026-01-02 00:46:00.934953 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - openstackclient 2026-01-02 00:46:00.934965 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - phpmyadmin 2026-01-02 00:46:00.934977 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - common 2026-01-02 00:46:00.936888 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- loadbalancer 2026-01-02 00:46:00.936912 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [2] --- opensearch 2026-01-02 00:46:00.936923 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [2] --- mariadb-ng 2026-01-02 00:46:00.937223 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [3] ---- horizon 2026-01-02 00:46:00.937306 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [3] ---- keystone 2026-01-02 00:46:00.937319 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- neutron 2026-01-02 00:46:00.937329 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [5] ------ wait-for-nova 2026-01-02 00:46:00.937568 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [6] ------- octavia 2026-01-02 00:46:00.938838 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- barbican 2026-01-02 00:46:00.938862 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- designate 2026-01-02 00:46:00.938871 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- ironic 2026-01-02 00:46:00.939108 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- placement 2026-01-02 00:46:00.939125 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- magnum 2026-01-02 00:46:00.939515 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- openvswitch 2026-01-02 00:46:00.939533 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [2] --- ovn 2026-01-02 00:46:00.939871 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- memcached 2026-01-02 
00:46:00.940115 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- redis 2026-01-02 00:46:00.940133 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- rabbitmq-ng 2026-01-02 00:46:00.940301 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - kubernetes 2026-01-02 00:46:00.942642 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- kubeconfig 2026-01-02 00:46:00.942681 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- copy-kubeconfig 2026-01-02 00:46:00.942828 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [0] - ceph 2026-01-02 00:46:00.945348 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [1] -- ceph-pools 2026-01-02 00:46:00.945394 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [2] --- copy-ceph-keys 2026-01-02 00:46:00.945420 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [3] ---- cephclient 2026-01-02 00:46:00.945433 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-02 00:46:00.945792 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- wait-for-keystone 2026-01-02 00:46:00.945828 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-02 00:46:00.945855 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [5] ------ glance 2026-01-02 00:46:00.946099 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [5] ------ cinder 2026-01-02 00:46:00.946130 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [5] ------ nova 2026-01-02 00:46:00.946399 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [4] ----- prometheus 2026-01-02 00:46:00.946422 | orchestrator | 2026-01-02 00:46:00 | INFO  | A [5] ------ grafana 2026-01-02 00:46:01.131798 | orchestrator | 2026-01-02 00:46:01 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-02 00:46:01.131903 | orchestrator | 2026-01-02 00:46:01 | INFO  | Tasks are running in the background 2026-01-02 00:46:04.312364 | orchestrator | 2026-01-02 00:46:04 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-02 00:46:06.416111 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:06.418694 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:06.419030 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:06.419623 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:06.423060 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:06.423564 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:06.424125 | orchestrator | 2026-01-02 00:46:06 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:06.424215 | orchestrator | 2026-01-02 00:46:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:09.459744 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:09.459845 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:09.460296 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:09.463195 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:09.463650 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:09.464223 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:09.464909 | orchestrator | 2026-01-02 00:46:09 | INFO  | Task 
1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:09.464928 | orchestrator | 2026-01-02 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:12.497389 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:12.497510 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:12.497526 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:12.497538 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:12.497549 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:12.497560 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:12.498319 | orchestrator | 2026-01-02 00:46:12 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:12.499776 | orchestrator | 2026-01-02 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:15.757367 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:15.757464 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:15.757980 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:15.759783 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:15.760343 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:15.762996 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task 
4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:15.763559 | orchestrator | 2026-01-02 00:46:15 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:15.763583 | orchestrator | 2026-01-02 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:18.849327 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:18.849424 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:18.849436 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:18.849444 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:18.849452 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:18.849460 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:18.849467 | orchestrator | 2026-01-02 00:46:18 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:18.849475 | orchestrator | 2026-01-02 00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:21.969254 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:21.969362 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:21.975217 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:21.975269 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:21.975288 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task 
86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:21.975851 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:21.976736 | orchestrator | 2026-01-02 00:46:21 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:21.977703 | orchestrator | 2026-01-02 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:25.087923 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state STARTED 2026-01-02 00:46:25.091679 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:25.091757 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:25.093863 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:25.096694 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:25.098280 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:25.098768 | orchestrator | 2026-01-02 00:46:25 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:25.098839 | orchestrator | 2026-01-02 00:46:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:28.233610 | orchestrator | 2026-01-02 00:46:28.233745 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-02 00:46:28.233775 | orchestrator | 2026-01-02 00:46:28.234637 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-01-02 00:46:28.234669 | orchestrator | Friday 02 January 2026 00:46:13 +0000 (0:00:00.368) 0:00:00.368 ******** 2026-01-02 00:46:28.234689 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:46:28.234710 | orchestrator | changed: [testbed-manager] 2026-01-02 00:46:28.234731 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:46:28.234752 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:46:28.234769 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:46:28.234780 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:46:28.234791 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:46:28.234802 | orchestrator | 2026-01-02 00:46:28.234813 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-02 00:46:28.234825 | orchestrator | Friday 02 January 2026 00:46:16 +0000 (0:00:03.626) 0:00:03.994 ******** 2026-01-02 00:46:28.234836 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-02 00:46:28.234847 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-02 00:46:28.234858 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-02 00:46:28.234869 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-02 00:46:28.234880 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-02 00:46:28.234890 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-02 00:46:28.234901 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-02 00:46:28.234912 | orchestrator | 2026-01-02 00:46:28.234923 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-02 00:46:28.234935 | orchestrator | Friday 02 January 2026 00:46:18 +0000 (0:00:01.489) 0:00:05.483 ******** 2026-01-02 00:46:28.234951 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:17.492965', 'end': '2026-01-02 00:46:17.502416', 'delta': '0:00:00.009451', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.234977 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:17.451458', 'end': '2026-01-02 00:46:17.458417', 'delta': '0:00:00.006959', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.235037 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:17.473076', 'end': '2026-01-02 00:46:17.482165', 'delta': '0:00:00.009089', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.235082 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:17.623422', 'end': '2026-01-02 00:46:17.634099', 'delta': '0:00:00.010677', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.235095 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:17.819406', 'end': '2026-01-02 00:46:17.829098', 'delta': '0:00:00.009692', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.235429 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:17.950719', 'end': '2026-01-02 00:46:17.955957', 'delta': '0:00:00.005238', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.235450 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-02 00:46:18.088478', 'end': '2026-01-02 00:46:18.098651', 'delta': '0:00:00.010173', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-02 00:46:28.235479 | orchestrator | 2026-01-02 00:46:28.235491 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-02 00:46:28.235502 | orchestrator | Friday 02 January 2026 00:46:21 +0000 (0:00:03.017) 0:00:08.501 ******** 2026-01-02 00:46:28.235513 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-02 00:46:28.235525 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-02 00:46:28.235536 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-02 00:46:28.235546 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-02 00:46:28.235557 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-02 00:46:28.235568 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-02 00:46:28.235579 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-02 00:46:28.235590 | orchestrator | 2026-01-02 00:46:28.235601 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-02 00:46:28.235616 | orchestrator | Friday 02 January 2026 00:46:23 +0000 (0:00:01.905) 0:00:10.406 ******** 2026-01-02 00:46:28.235627 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-02 00:46:28.235638 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-02 00:46:28.235649 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-02 00:46:28.235660 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-02 00:46:28.235671 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-02 00:46:28.235682 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-02 00:46:28.235693 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-02 00:46:28.235704 | orchestrator | 2026-01-02 00:46:28.235720 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:46:28.235753 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235773 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235792 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235809 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235827 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235845 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235862 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:46:28.235879 | orchestrator | 2026-01-02 00:46:28.235895 | orchestrator | 2026-01-02 00:46:28.235912 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-02 00:46:28.235929 | orchestrator | Friday 02 January 2026 00:46:26 +0000 (0:00:02.952) 0:00:13.359 ******** 2026-01-02 00:46:28.235946 | orchestrator | =============================================================================== 2026-01-02 00:46:28.235963 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.63s 2026-01-02 00:46:28.235991 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.02s 2026-01-02 00:46:28.236034 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.95s 2026-01-02 00:46:28.236054 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.91s 2026-01-02 00:46:28.236073 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.49s 2026-01-02 00:46:28.236093 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task fa824b0a-3afc-4801-ac5e-f5087d2f3952 is in state SUCCESS 2026-01-02 00:46:28.236112 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:28.236132 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:28.236151 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:28.236170 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:28.236188 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:28.236207 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:28.236218 | orchestrator | 2026-01-02 00:46:28 | INFO  | Task 
1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:28.236229 | orchestrator | 2026-01-02 00:46:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:31.340401 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:31.340496 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:31.340510 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:31.340521 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:31.340531 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:31.340541 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:31.341167 | orchestrator | 2026-01-02 00:46:31 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:31.341202 | orchestrator | 2026-01-02 00:46:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:34.394517 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:34.394643 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:34.394668 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:34.395401 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:34.396108 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:34.396565 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task 
4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:34.397307 | orchestrator | 2026-01-02 00:46:34 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:34.397333 | orchestrator | 2026-01-02 00:46:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:37.501747 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:37.502217 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:37.503766 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:37.505079 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:37.506371 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:37.507683 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:37.509026 | orchestrator | 2026-01-02 00:46:37 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:37.509125 | orchestrator | 2026-01-02 00:46:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:40.556601 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:40.557525 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:40.560851 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:40.564535 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:40.567218 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task 
86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:40.570871 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:40.573513 | orchestrator | 2026-01-02 00:46:40 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:40.574272 | orchestrator | 2026-01-02 00:46:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:43.667306 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:43.670131 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:43.673987 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:43.674092 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:43.675169 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:43.677240 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:43.677811 | orchestrator | 2026-01-02 00:46:43 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:43.677837 | orchestrator | 2026-01-02 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:46.803110 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:46.803241 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:46.803257 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:46.803269 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task 
bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:46.803304 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:46.803316 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state STARTED 2026-01-02 00:46:46.803327 | orchestrator | 2026-01-02 00:46:46 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:46.803338 | orchestrator | 2026-01-02 00:46:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:49.809832 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:49.809933 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:49.809950 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:49.810660 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:49.813702 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:49.814378 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task 4be33a19-2458-4148-89ea-de2c02e24c49 is in state SUCCESS 2026-01-02 00:46:49.816224 | orchestrator | 2026-01-02 00:46:49 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:49.816264 | orchestrator | 2026-01-02 00:46:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:53.013674 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:53.013765 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:53.013775 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 
c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:53.013784 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:53.013791 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:53.013798 | orchestrator | 2026-01-02 00:46:52 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:53.013805 | orchestrator | 2026-01-02 00:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:56.294241 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:56.294344 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:56.294358 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:56.294371 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:56.294383 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:56.294394 | orchestrator | 2026-01-02 00:46:55 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state STARTED 2026-01-02 00:46:56.294405 | orchestrator | 2026-01-02 00:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:46:58.985473 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:46:58.985576 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:46:58.987053 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:46:58.987092 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 
bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:46:58.990326 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:46:58.991731 | orchestrator | 2026-01-02 00:46:58 | INFO  | Task 1d788a6f-1e75-4772-a449-bed3ebdfc8f7 is in state SUCCESS 2026-01-02 00:46:58.991749 | orchestrator | 2026-01-02 00:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:02.024140 | orchestrator | 2026-01-02 00:47:02 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:02.025198 | orchestrator | 2026-01-02 00:47:02 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:02.025820 | orchestrator | 2026-01-02 00:47:02 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:02.026479 | orchestrator | 2026-01-02 00:47:02 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:02.029394 | orchestrator | 2026-01-02 00:47:02 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:02.029471 | orchestrator | 2026-01-02 00:47:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:05.063121 | orchestrator | 2026-01-02 00:47:05 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:05.063616 | orchestrator | 2026-01-02 00:47:05 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:05.064046 | orchestrator | 2026-01-02 00:47:05 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:05.064877 | orchestrator | 2026-01-02 00:47:05 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:05.065650 | orchestrator | 2026-01-02 00:47:05 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:05.065676 | orchestrator | 2026-01-02 00:47:05 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 00:47:08.096107 | orchestrator | 2026-01-02 00:47:08 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:08.097242 | orchestrator | 2026-01-02 00:47:08 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:08.097652 | orchestrator | 2026-01-02 00:47:08 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:08.098848 | orchestrator | 2026-01-02 00:47:08 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:08.100084 | orchestrator | 2026-01-02 00:47:08 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:08.100117 | orchestrator | 2026-01-02 00:47:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:11.133383 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:11.133961 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:11.134332 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:11.134978 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:11.135902 | orchestrator | 2026-01-02 00:47:11 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:11.135965 | orchestrator | 2026-01-02 00:47:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:14.183726 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:14.186568 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:14.187537 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 
c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:14.188602 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:14.190437 | orchestrator | 2026-01-02 00:47:14 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:14.190509 | orchestrator | 2026-01-02 00:47:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:17.238566 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:17.242752 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:17.245149 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:17.267048 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:17.268448 | orchestrator | 2026-01-02 00:47:17 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:17.268496 | orchestrator | 2026-01-02 00:47:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:20.307057 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:20.308199 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:20.310410 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:20.310525 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:20.312777 | orchestrator | 2026-01-02 00:47:20 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:20.312799 | orchestrator | 2026-01-02 00:47:20 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 00:47:23.371789 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:23.372011 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:23.372962 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:23.373913 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:23.376759 | orchestrator | 2026-01-02 00:47:23 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:23.376849 | orchestrator | 2026-01-02 00:47:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:26.411869 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:26.413472 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state STARTED 2026-01-02 00:47:26.414067 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:26.414819 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:26.416506 | orchestrator | 2026-01-02 00:47:26 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:26.416533 | orchestrator | 2026-01-02 00:47:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:29.450326 | orchestrator | 2026-01-02 00:47:29 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state STARTED 2026-01-02 00:47:29.451024 | orchestrator | 2026-01-02 00:47:29.451062 | orchestrator | 2026-01-02 00:47:29.451075 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-02 00:47:29.451087 | 
orchestrator | 2026-01-02 00:47:29.451098 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-02 00:47:29.451110 | orchestrator | Friday 02 January 2026 00:46:12 +0000 (0:00:00.499) 0:00:00.499 ******** 2026-01-02 00:47:29.451121 | orchestrator | ok: [testbed-manager] => { 2026-01-02 00:47:29.451135 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-01-02 00:47:29.451148 | orchestrator | } 2026-01-02 00:47:29.451159 | orchestrator | 2026-01-02 00:47:29.451170 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-01-02 00:47:29.451181 | orchestrator | Friday 02 January 2026 00:46:13 +0000 (0:00:00.195) 0:00:00.695 ******** 2026-01-02 00:47:29.451192 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.451204 | orchestrator | 2026-01-02 00:47:29.451215 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-01-02 00:47:29.451226 | orchestrator | Friday 02 January 2026 00:46:14 +0000 (0:00:01.328) 0:00:02.023 ******** 2026-01-02 00:47:29.451237 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-01-02 00:47:29.451247 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-01-02 00:47:29.451258 | orchestrator | 2026-01-02 00:47:29.451270 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-01-02 00:47:29.451280 | orchestrator | Friday 02 January 2026 00:46:15 +0000 (0:00:01.414) 0:00:03.437 ******** 2026-01-02 00:47:29.451291 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.451302 | orchestrator | 2026-01-02 00:47:29.451313 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-01-02 00:47:29.451324 | orchestrator | Friday 02 January 2026 00:46:18 +0000 
(0:00:02.937) 0:00:06.376 ******** 2026-01-02 00:47:29.451335 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.451346 | orchestrator | 2026-01-02 00:47:29.451357 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-01-02 00:47:29.451367 | orchestrator | Friday 02 January 2026 00:46:20 +0000 (0:00:01.683) 0:00:08.059 ******** 2026-01-02 00:47:29.451378 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-01-02 00:47:29.451389 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.451400 | orchestrator | 2026-01-02 00:47:29.451411 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-01-02 00:47:29.451422 | orchestrator | Friday 02 January 2026 00:46:45 +0000 (0:00:25.117) 0:00:33.176 ******** 2026-01-02 00:47:29.451433 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.451444 | orchestrator | 2026-01-02 00:47:29.451455 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:47:29.451466 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:29.451478 | orchestrator | 2026-01-02 00:47:29.451489 | orchestrator | 2026-01-02 00:47:29.451500 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:47:29.451511 | orchestrator | Friday 02 January 2026 00:46:48 +0000 (0:00:03.342) 0:00:36.518 ******** 2026-01-02 00:47:29.451522 | orchestrator | =============================================================================== 2026-01-02 00:47:29.451557 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.12s 2026-01-02 00:47:29.451569 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.34s 2026-01-02 00:47:29.451625 | orchestrator | 
osism.services.homer : Copy config.yml configuration file --------------- 2.94s 2026-01-02 00:47:29.451637 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.68s 2026-01-02 00:47:29.451648 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.42s 2026-01-02 00:47:29.451659 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.33s 2026-01-02 00:47:29.451670 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.20s 2026-01-02 00:47:29.451681 | orchestrator | 2026-01-02 00:47:29.451692 | orchestrator | 2026-01-02 00:47:29.451703 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-02 00:47:29.451714 | orchestrator | 2026-01-02 00:47:29.451724 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-02 00:47:29.451735 | orchestrator | Friday 02 January 2026 00:46:11 +0000 (0:00:00.253) 0:00:00.253 ******** 2026-01-02 00:47:29.451746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-02 00:47:29.451759 | orchestrator | 2026-01-02 00:47:29.451769 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-02 00:47:29.451780 | orchestrator | Friday 02 January 2026 00:46:12 +0000 (0:00:00.734) 0:00:00.988 ******** 2026-01-02 00:47:29.451791 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-02 00:47:29.451802 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-02 00:47:29.451813 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-02 00:47:29.451824 | orchestrator | 2026-01-02 00:47:29.451835 | orchestrator | TASK 
[osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-02 00:47:29.451846 | orchestrator | Friday 02 January 2026 00:46:14 +0000 (0:00:02.002) 0:00:02.990 ******** 2026-01-02 00:47:29.451857 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.451868 | orchestrator | 2026-01-02 00:47:29.451879 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-02 00:47:29.451890 | orchestrator | Friday 02 January 2026 00:46:17 +0000 (0:00:02.535) 0:00:05.526 ******** 2026-01-02 00:47:29.451915 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-01-02 00:47:29.451927 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.451938 | orchestrator | 2026-01-02 00:47:29.451948 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-02 00:47:29.451995 | orchestrator | Friday 02 January 2026 00:46:48 +0000 (0:00:31.481) 0:00:37.007 ******** 2026-01-02 00:47:29.452007 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.452018 | orchestrator | 2026-01-02 00:47:29.452029 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-02 00:47:29.452039 | orchestrator | Friday 02 January 2026 00:46:49 +0000 (0:00:01.451) 0:00:38.459 ******** 2026-01-02 00:47:29.452050 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.452061 | orchestrator | 2026-01-02 00:47:29.452072 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-02 00:47:29.452132 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:00.543) 0:00:39.003 ******** 2026-01-02 00:47:29.452143 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.452154 | orchestrator | 2026-01-02 00:47:29.452166 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 
2026-01-02 00:47:29.452177 | orchestrator | Friday 02 January 2026 00:46:52 +0000 (0:00:01.730) 0:00:40.733 ******** 2026-01-02 00:47:29.452188 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.452199 | orchestrator | 2026-01-02 00:47:29.452210 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-02 00:47:29.452230 | orchestrator | Friday 02 January 2026 00:46:54 +0000 (0:00:02.428) 0:00:43.162 ******** 2026-01-02 00:47:29.452241 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.452252 | orchestrator | 2026-01-02 00:47:29.452263 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-01-02 00:47:29.452274 | orchestrator | Friday 02 January 2026 00:46:56 +0000 (0:00:02.046) 0:00:45.209 ******** 2026-01-02 00:47:29.452285 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.452296 | orchestrator | 2026-01-02 00:47:29.452307 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:47:29.452318 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:29.452329 | orchestrator | 2026-01-02 00:47:29.452340 | orchestrator | 2026-01-02 00:47:29.452351 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:47:29.452362 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.459) 0:00:45.668 ******** 2026-01-02 00:47:29.452373 | orchestrator | =============================================================================== 2026-01-02 00:47:29.452389 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.48s 2026-01-02 00:47:29.452401 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.54s 2026-01-02 00:47:29.452412 | orchestrator | osism.services.openstackclient : Ensure that all 
containers are up ------ 2.43s 2026-01-02 00:47:29.452423 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 2.05s 2026-01-02 00:47:29.452433 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.00s 2026-01-02 00:47:29.452444 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.73s 2026-01-02 00:47:29.452455 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.45s 2026-01-02 00:47:29.452466 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.73s 2026-01-02 00:47:29.452477 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.54s 2026-01-02 00:47:29.452488 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.46s 2026-01-02 00:47:29.452499 | orchestrator | 2026-01-02 00:47:29.452510 | orchestrator | 2026-01-02 00:47:29.452521 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-02 00:47:29.452531 | orchestrator | 2026-01-02 00:47:29.452543 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-02 00:47:29.452554 | orchestrator | Friday 02 January 2026 00:46:30 +0000 (0:00:00.190) 0:00:00.190 ******** 2026-01-02 00:47:29.452564 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.452575 | orchestrator | 2026-01-02 00:47:29.452586 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-02 00:47:29.452597 | orchestrator | Friday 02 January 2026 00:46:31 +0000 (0:00:00.733) 0:00:00.924 ******** 2026-01-02 00:47:29.452608 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-02 00:47:29.452619 | orchestrator | 2026-01-02 00:47:29.452630 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] 
**************** 2026-01-02 00:47:29.452641 | orchestrator | Friday 02 January 2026 00:46:32 +0000 (0:00:00.578) 0:00:01.502 ******** 2026-01-02 00:47:29.452652 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.452663 | orchestrator | 2026-01-02 00:47:29.452674 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-02 00:47:29.452685 | orchestrator | Friday 02 January 2026 00:46:33 +0000 (0:00:01.205) 0:00:02.708 ******** 2026-01-02 00:47:29.452696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-01-02 00:47:29.452707 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:29.452718 | orchestrator | 2026-01-02 00:47:29.452729 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-02 00:47:29.452740 | orchestrator | Friday 02 January 2026 00:47:21 +0000 (0:00:48.674) 0:00:51.382 ******** 2026-01-02 00:47:29.452757 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:29.452768 | orchestrator | 2026-01-02 00:47:29.452779 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:47:29.452790 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:29.452801 | orchestrator | 2026-01-02 00:47:29.452812 | orchestrator | 2026-01-02 00:47:29.452823 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:47:29.452840 | orchestrator | Friday 02 January 2026 00:47:28 +0000 (0:00:06.088) 0:00:57.470 ******** 2026-01-02 00:47:29.452852 | orchestrator | =============================================================================== 2026-01-02 00:47:29.452863 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.67s 2026-01-02 00:47:29.452873 | orchestrator | osism.services.phpmyadmin : Restart 
phpmyadmin service ------------------ 6.09s 2026-01-02 00:47:29.452884 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.21s 2026-01-02 00:47:29.452895 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.73s 2026-01-02 00:47:29.452906 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.58s 2026-01-02 00:47:29.452917 | orchestrator | 2026-01-02 00:47:29 | INFO  | Task da85fc6f-674a-4e6c-9fef-7b9d54fefa9a is in state SUCCESS 2026-01-02 00:47:29.453998 | orchestrator | 2026-01-02 00:47:29 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:29.454371 | orchestrator | 2026-01-02 00:47:29 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:29.458409 | orchestrator | 2026-01-02 00:47:29 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:47:29.458441 | orchestrator | 2026-01-02 00:47:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:47:32.499947 | orchestrator | 2026-01-02 00:47:32.500079 | orchestrator | 2026-01-02 00:47:32.500091 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:47:32.500100 | orchestrator | 2026-01-02 00:47:32.500108 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:47:32.500117 | orchestrator | Friday 02 January 2026 00:46:13 +0000 (0:00:00.519) 0:00:00.519 ******** 2026-01-02 00:47:32.500126 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-01-02 00:47:32.500135 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-01-02 00:47:32.500143 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-01-02 00:47:32.500150 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-01-02 00:47:32.500158 
| orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-01-02 00:47:32.500166 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-01-02 00:47:32.500188 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-01-02 00:47:32.500197 | orchestrator | 2026-01-02 00:47:32.500205 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-01-02 00:47:32.500213 | orchestrator | 2026-01-02 00:47:32.500242 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-01-02 00:47:32.500251 | orchestrator | Friday 02 January 2026 00:46:15 +0000 (0:00:02.017) 0:00:02.537 ******** 2026-01-02 00:47:32.500271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:47:32.500281 | orchestrator | 2026-01-02 00:47:32.500289 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-01-02 00:47:32.500297 | orchestrator | Friday 02 January 2026 00:46:16 +0000 (0:00:01.355) 0:00:03.893 ******** 2026-01-02 00:47:32.500324 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:47:32.500333 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:47:32.500341 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:47:32.500349 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:32.500356 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:47:32.500364 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:47:32.500372 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:47:32.500380 | orchestrator | 2026-01-02 00:47:32.500388 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-01-02 00:47:32.500396 | orchestrator | Friday 02 January 2026 00:46:18 
+0000 (0:00:01.553) 0:00:05.446 ******** 2026-01-02 00:47:32.500403 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:47:32.500411 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:47:32.500516 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:47:32.500529 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:47:32.500537 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:47:32.500545 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:47:32.500553 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:32.500561 | orchestrator | 2026-01-02 00:47:32.500569 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-01-02 00:47:32.500577 | orchestrator | Friday 02 January 2026 00:46:21 +0000 (0:00:03.252) 0:00:08.699 ******** 2026-01-02 00:47:32.500585 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:32.500593 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:47:32.500601 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:47:32.500609 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:47:32.500616 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:47:32.500624 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:47:32.500632 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:47:32.500640 | orchestrator | 2026-01-02 00:47:32.500648 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-01-02 00:47:32.500656 | orchestrator | Friday 02 January 2026 00:46:24 +0000 (0:00:02.747) 0:00:11.447 ******** 2026-01-02 00:47:32.500663 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:32.500671 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:47:32.500679 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:47:32.500687 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:47:32.500694 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:47:32.500702 | orchestrator | changed: [testbed-node-5] 2026-01-02 
00:47:32.500710 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:47:32.500717 | orchestrator | 2026-01-02 00:47:32.500725 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-01-02 00:47:32.500733 | orchestrator | Friday 02 January 2026 00:46:35 +0000 (0:00:11.175) 0:00:22.623 ******** 2026-01-02 00:47:32.500741 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:47:32.500750 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:47:32.500762 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:47:32.500775 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:47:32.500788 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:47:32.500801 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:47:32.500855 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:32.500878 | orchestrator | 2026-01-02 00:47:32.500898 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-01-02 00:47:32.500907 | orchestrator | Friday 02 January 2026 00:47:13 +0000 (0:00:38.619) 0:01:01.242 ******** 2026-01-02 00:47:32.500916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:47:32.500925 | orchestrator | 2026-01-02 00:47:32.500933 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-01-02 00:47:32.500942 | orchestrator | Friday 02 January 2026 00:47:15 +0000 (0:00:01.263) 0:01:02.506 ******** 2026-01-02 00:47:32.500950 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-01-02 00:47:32.500967 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-01-02 00:47:32.501012 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-01-02 00:47:32.501022 | orchestrator | 
changed: [testbed-node-3] => (item=netdata.conf) 2026-01-02 00:47:32.501046 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-01-02 00:47:32.501055 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-01-02 00:47:32.501063 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-01-02 00:47:32.501071 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-01-02 00:47:32.501079 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-01-02 00:47:32.501086 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-01-02 00:47:32.501094 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-01-02 00:47:32.501102 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-01-02 00:47:32.501110 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-01-02 00:47:32.501118 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-01-02 00:47:32.501125 | orchestrator | 2026-01-02 00:47:32.501133 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-01-02 00:47:32.501149 | orchestrator | Friday 02 January 2026 00:47:19 +0000 (0:00:04.033) 0:01:06.539 ******** 2026-01-02 00:47:32.501158 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:32.501167 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:47:32.501176 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:47:32.501185 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:47:32.501194 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:47:32.501204 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:47:32.501212 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:47:32.501221 | orchestrator | 2026-01-02 00:47:32.501230 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-01-02 00:47:32.501239 | orchestrator | Friday 02 January 2026 00:47:20 +0000 (0:00:01.023) 
0:01:07.563 ******** 2026-01-02 00:47:32.501248 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:32.501257 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:47:32.501267 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:47:32.501276 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:47:32.501285 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:47:32.501294 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:47:32.501303 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:47:32.501312 | orchestrator | 2026-01-02 00:47:32.501320 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-01-02 00:47:32.501329 | orchestrator | Friday 02 January 2026 00:47:21 +0000 (0:00:01.389) 0:01:08.952 ******** 2026-01-02 00:47:32.501338 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:32.501347 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:47:32.501356 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:47:32.501365 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:47:32.501374 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:47:32.501383 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:47:32.501392 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:47:32.501400 | orchestrator | 2026-01-02 00:47:32.501410 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-01-02 00:47:32.501419 | orchestrator | Friday 02 January 2026 00:47:22 +0000 (0:00:01.105) 0:01:10.057 ******** 2026-01-02 00:47:32.501428 | orchestrator | ok: [testbed-manager] 2026-01-02 00:47:32.501437 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:47:32.501446 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:47:32.501455 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:47:32.501464 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:47:32.501473 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:47:32.501483 | orchestrator | ok: [testbed-node-5] 
2026-01-02 00:47:32.501491 | orchestrator | 2026-01-02 00:47:32.501499 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-01-02 00:47:32.501507 | orchestrator | Friday 02 January 2026 00:47:24 +0000 (0:00:01.680) 0:01:11.738 ******** 2026-01-02 00:47:32.501520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-01-02 00:47:32.501532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:47:32.501540 | orchestrator | 2026-01-02 00:47:32.501548 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-01-02 00:47:32.501556 | orchestrator | Friday 02 January 2026 00:47:25 +0000 (0:00:01.172) 0:01:12.911 ******** 2026-01-02 00:47:32.501564 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:32.501572 | orchestrator | 2026-01-02 00:47:32.501580 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-01-02 00:47:32.501588 | orchestrator | Friday 02 January 2026 00:47:27 +0000 (0:00:01.791) 0:01:14.703 ******** 2026-01-02 00:47:32.501596 | orchestrator | changed: [testbed-manager] 2026-01-02 00:47:32.501604 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:47:32.501611 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:47:32.501619 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:47:32.501627 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:47:32.501635 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:47:32.501643 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:47:32.501651 | orchestrator | 2026-01-02 00:47:32.501659 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-02 00:47:32.501667 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501675 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501683 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501691 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501705 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501713 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501721 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:47:32.501729 | orchestrator | 2026-01-02 00:47:32.501737 | orchestrator | 2026-01-02 00:47:32.501745 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:47:32.501753 | orchestrator | Friday 02 January 2026 00:47:30 +0000 (0:00:03.221) 0:01:17.924 ******** 2026-01-02 00:47:32.501761 | orchestrator | =============================================================================== 2026-01-02 00:47:32.501769 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 38.62s 2026-01-02 00:47:32.501781 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.18s 2026-01-02 00:47:32.501789 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.03s 2026-01-02 00:47:32.501797 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.25s 2026-01-02 00:47:32.501805 | orchestrator | 
osism.services.netdata : Restart service netdata ------------------------ 3.22s 2026-01-02 00:47:32.501813 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.75s 2026-01-02 00:47:32.501821 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.02s 2026-01-02 00:47:32.501835 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.79s 2026-01-02 00:47:32.501843 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.68s 2026-01-02 00:47:32.501868 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.55s 2026-01-02 00:47:32.501893 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.39s 2026-01-02 00:47:32.501922 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.36s 2026-01-02 00:47:32.501931 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.26s 2026-01-02 00:47:32.501939 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.17s 2026-01-02 00:47:32.501947 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.11s 2026-01-02 00:47:32.501955 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.02s 2026-01-02 00:47:32.501963 | orchestrator | 2026-01-02 00:47:32 | INFO  | Task f7d8135e-972a-4d2f-a12c-4f270050d7c2 is in state SUCCESS 2026-01-02 00:47:32.501971 | orchestrator | 2026-01-02 00:47:32 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:47:32.502514 | orchestrator | 2026-01-02 00:47:32 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:47:32.505115 | orchestrator | 2026-01-02 00:47:32 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state 
STARTED 2026-01-02 00:47:32.505360 | orchestrator | 2026-01-02 00:47:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:21.189498 | orchestrator | 
2026-01-02 00:48:21 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:21.189584 | orchestrator | 2026-01-02 00:48:21 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:21.189864 | orchestrator | 2026-01-02 00:48:21 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state STARTED 2026-01-02 00:48:21.189884 | orchestrator | 2026-01-02 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:24.221371 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:24.223235 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:24.223578 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:24.224526 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:24.230345 | orchestrator | 2026-01-02 00:48:24.230426 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task 86403bf5-b644-4b10-a6ba-c4130d65c270 is in state SUCCESS 2026-01-02 00:48:24.233855 | orchestrator | 2026-01-02 00:48:24.233912 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-02 00:48:24.233924 | orchestrator | 2026-01-02 00:48:24.233944 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-02 00:48:24.233979 | orchestrator | Friday 02 January 2026 00:46:05 +0000 (0:00:00.206) 0:00:00.206 ******** 2026-01-02 00:48:24.233991 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:48:24.234003 | orchestrator | 2026-01-02 00:48:24.234012 | orchestrator | TASK [common : Ensuring config directories 
exist] ****************************** 2026-01-02 00:48:24.234107 | orchestrator | Friday 02 January 2026 00:46:06 +0000 (0:00:01.073) 0:00:01.279 ******** 2026-01-02 00:48:24.234118 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234128 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234138 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 00:48:24.234148 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234158 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234168 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 00:48:24.234177 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234189 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 00:48:24.234198 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 00:48:24.234208 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234217 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234227 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234237 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234247 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234257 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 
00:48:24.234266 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-02 00:48:24.234276 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 00:48:24.234285 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234338 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-02 00:48:24.234351 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234362 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-02 00:48:24.234373 | orchestrator | 2026-01-02 00:48:24.234384 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-02 00:48:24.234401 | orchestrator | Friday 02 January 2026 00:46:10 +0000 (0:00:03.897) 0:00:05.177 ******** 2026-01-02 00:48:24.234413 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:48:24.234425 | orchestrator | 2026-01-02 00:48:24.234437 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-02 00:48:24.234448 | orchestrator | Friday 02 January 2026 00:46:11 +0000 (0:00:01.214) 0:00:06.391 ******** 2026-01-02 00:48:24.234463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234574 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.234778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-02 00:48:24.234789 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-02 00:48:24.234847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234919 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.234949 | orchestrator | 2026-01-02 00:48:24.235025 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-02 00:48:24.235035 | orchestrator | Friday 02 January 2026 00:46:17 +0000 (0:00:05.737) 0:00:12.128 ******** 2026-01-02 00:48:24.235046 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 
00:48:24.235062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235172 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:48:24.235183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235235 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:48:24.235245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235321 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:24.235331 | orchestrator | 
skipping: [testbed-node-2] 2026-01-02 00:48:24.235343 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:48:24.235355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235413 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:48:24.235454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235479 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:48:24.235490 | orchestrator | 2026-01-02 00:48:24.235501 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-02 00:48:24.235562 | orchestrator | Friday 02 January 2026 00:46:20 +0000 (0:00:02.843) 0:00:14.972 ******** 2026-01-02 00:48:24.235574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235656 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:48:24.235706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235730 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:48:24.235740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235767 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:24.235777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-02 00:48:24.235829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.235840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235867 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:48:24.235877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235912 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:48:24.235922 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:48:24.235932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.235942 | orchestrator | skipping: [testbed-node-5] 2026-01-02 
00:48:24.235952 | orchestrator | 2026-01-02 00:48:24.235983 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-02 00:48:24.235994 | orchestrator | Friday 02 January 2026 00:46:25 +0000 (0:00:05.240) 0:00:20.212 ******** 2026-01-02 00:48:24.236003 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:48:24.236013 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:48:24.236023 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:24.236033 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:48:24.236043 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:48:24.236060 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:48:24.236071 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:48:24.236081 | orchestrator | 2026-01-02 00:48:24.236090 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-02 00:48:24.236100 | orchestrator | Friday 02 January 2026 00:46:26 +0000 (0:00:00.992) 0:00:21.206 ******** 2026-01-02 00:48:24.236110 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:48:24.236120 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:48:24.236130 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:24.236139 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:48:24.236149 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:48:24.236159 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:48:24.236168 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:48:24.236178 | orchestrator | 2026-01-02 00:48:24.236188 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-02 00:48:24.236198 | orchestrator | Friday 02 January 2026 00:46:28 +0000 (0:00:01.262) 0:00:22.469 ******** 2026-01-02 00:48:24.236213 | orchestrator | skipping: [testbed-manager] 2026-01-02 00:48:24.236223 | orchestrator | skipping: [testbed-node-0] 2026-01-02 
00:48:24.236233 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:24.236242 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:48:24.236252 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:48:24.236360 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:48:24.236374 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:48:24.236384 | orchestrator | 2026-01-02 00:48:24.236394 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-02 00:48:24.236404 | orchestrator | Friday 02 January 2026 00:46:28 +0000 (0:00:00.646) 0:00:23.115 ******** 2026-01-02 00:48:24.236414 | orchestrator | changed: [testbed-manager] 2026-01-02 00:48:24.236423 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:48:24.236433 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:24.236442 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:24.236451 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:48:24.236461 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:48:24.236470 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:48:24.236480 | orchestrator | 2026-01-02 00:48:24.236490 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-02 00:48:24.236499 | orchestrator | Friday 02 January 2026 00:46:31 +0000 (0:00:02.426) 0:00:25.541 ******** 2026-01-02 00:48:24.236510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236521 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236537 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236587 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-02 00:48:24.236618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-02 00:48:24.236633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236675 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:48:24.236790 | orchestrator | 2026-01-02 00:48:24.236800 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-02 00:48:24.236810 | orchestrator | Friday 02 January 2026 00:46:35 +0000 (0:00:04.025) 0:00:29.567 ******** 2026-01-02 00:48:24.236820 | orchestrator | [WARNING]: Skipped 2026-01-02 00:48:24.236832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-02 00:48:24.236842 | orchestrator | to this access issue: 2026-01-02 00:48:24.236852 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-02 00:48:24.236862 | orchestrator | directory 2026-01-02 00:48:24.236871 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 00:48:24.236881 | orchestrator | 2026-01-02 00:48:24.236891 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-02 00:48:24.236900 | orchestrator | 
Friday 02 January 2026 00:46:36 +0000 (0:00:01.107) 0:00:30.675 ********
2026-01-02 00:48:24.236910 | orchestrator | [WARNING]: Skipped
2026-01-02 00:48:24.236919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-02 00:48:24.236929 | orchestrator | to this access issue:
2026-01-02 00:48:24.236939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-02 00:48:24.236948 | orchestrator | directory
2026-01-02 00:48:24.236980 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:48:24.236990 | orchestrator |
2026-01-02 00:48:24.237001 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-02 00:48:24.237012 | orchestrator | Friday 02 January 2026 00:46:37 +0000 (0:00:00.892) 0:00:31.568 ********
2026-01-02 00:48:24.237023 | orchestrator | [WARNING]: Skipped
2026-01-02 00:48:24.237035 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-02 00:48:24.237046 | orchestrator | to this access issue:
2026-01-02 00:48:24.237057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-02 00:48:24.237068 | orchestrator | directory
2026-01-02 00:48:24.237080 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:48:24.237091 | orchestrator |
2026-01-02 00:48:24.237103 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-02 00:48:24.237114 | orchestrator | Friday 02 January 2026 00:46:37 +0000 (0:00:00.771) 0:00:32.339 ********
2026-01-02 00:48:24.237125 | orchestrator | [WARNING]: Skipped
2026-01-02 00:48:24.237137 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-02 00:48:24.237148 | orchestrator | to this access issue:
2026-01-02 00:48:24.237158 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-02 00:48:24.237168 | orchestrator | directory
2026-01-02 00:48:24.237177 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-02 00:48:24.237187 | orchestrator |
2026-01-02 00:48:24.237197 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-02 00:48:24.237207 | orchestrator | Friday 02 January 2026 00:46:38 +0000 (0:00:00.811) 0:00:33.151 ********
2026-01-02 00:48:24.237217 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:24.237226 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:48:24.237236 | orchestrator | changed: [testbed-manager]
2026-01-02 00:48:24.237246 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:48:24.237261 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:24.237271 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:24.237281 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:48:24.237291 | orchestrator |
2026-01-02 00:48:24.237300 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-02 00:48:24.237310 | orchestrator | Friday 02 January 2026 00:46:44 +0000 (0:00:05.303) 0:00:38.455 ********
2026-01-02 00:48:24.237325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237335 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237345 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237355 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237364 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237374 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237384 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-02 00:48:24.237393 | orchestrator |
2026-01-02 00:48:24.237403 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-02 00:48:24.237413 | orchestrator | Friday 02 January 2026 00:46:47 +0000 (0:00:03.386) 0:00:41.842 ********
2026-01-02 00:48:24.237423 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:48:24.237433 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:48:24.237442 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:48:24.237452 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:48:24.237462 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:48:24.237471 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:48:24.237481 | orchestrator | changed: [testbed-manager]
2026-01-02 00:48:24.237491 | orchestrator |
2026-01-02 00:48:24.237500 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-02 00:48:24.237510 | orchestrator | Friday 02 January 2026 00:46:49 +0000 (0:00:02.497) 0:00:44.340 ********
2026-01-02 00:48:24.237530 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237541 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237551 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237605 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237621 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237643 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237653 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237669 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237679 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237700 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237733 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237743 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237753 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.237769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237780 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.237790 | orchestrator |
2026-01-02 00:48:24.237800 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-01-02 00:48:24.237809 | orchestrator | Friday 02 January 2026 00:46:52 +0000 (0:00:02.572) 0:00:46.912 ********
2026-01-02 00:48:24.237824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237834 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237844 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237853 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237863 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237872 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237882 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-02 00:48:24.237892 | orchestrator |
2026-01-02 00:48:24.237901 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-01-02 00:48:24.237911 | orchestrator | Friday 02 January 2026 00:46:55 +0000 (0:00:02.894) 0:00:49.807 ********
2026-01-02 00:48:24.237921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.237930 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.237940 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.237950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.238012 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.238059 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.238070 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-02 00:48:24.238079 | orchestrator |
2026-01-02 00:48:24.238097 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-01-02 00:48:24.238107 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:02.260) 0:00:52.067 ********
2026-01-02 00:48:24.238118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238137 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238184 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238269 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238409 | orchestrator |
2026-01-02 00:48:24.238419 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-01-02 00:48:24.238429 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:03.219) 0:00:55.287 ********
2026-01-02 00:48:24.238439 | orchestrator | changed: [testbed-manager] => {
2026-01-02 00:48:24.238449 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238459 | orchestrator | }
2026-01-02 00:48:24.238468 | orchestrator | changed: [testbed-node-0] => {
2026-01-02 00:48:24.238478 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238488 | orchestrator | }
2026-01-02 00:48:24.238498 | orchestrator | changed: [testbed-node-1] => {
2026-01-02 00:48:24.238508 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238517 | orchestrator | }
2026-01-02 00:48:24.238525 | orchestrator | changed: [testbed-node-2] => {
2026-01-02 00:48:24.238533 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238546 | orchestrator | }
2026-01-02 00:48:24.238554 | orchestrator | changed: [testbed-node-3] => {
2026-01-02 00:48:24.238562 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238570 | orchestrator | }
2026-01-02 00:48:24.238578 | orchestrator | changed: [testbed-node-4] => {
2026-01-02 00:48:24.238585 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238593 | orchestrator | }
2026-01-02 00:48:24.238601 | orchestrator | changed: [testbed-node-5] => {
2026-01-02 00:48:24.238609 | orchestrator |     "msg": "Notifying handlers"
2026-01-02 00:48:24.238617 | orchestrator | }
2026-01-02 00:48:24.238625 | orchestrator |
2026-01-02 00:48:24.238633 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-02 00:48:24.238641 | orchestrator | Friday 02 January 2026 00:47:01 +0000 (0:00:00.774) 0:00:56.061 ********
2026-01-02 00:48:24.238655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238748 | orchestrator | skipping: [testbed-manager]
2026-01-02 00:48:24.238756 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:48:24.238765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-02 00:48:24.238773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 00:48:24.238790
| orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:24.238798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.238816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.238824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.238833 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:48:24.238841 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:48:24.238853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.238862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.238870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.238878 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:48:24.238886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-02 00:48:24.238895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.238913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:48:24.238921 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:48:24.238929 | orchestrator | 2026-01-02 00:48:24.238937 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-02 00:48:24.238946 | orchestrator | Friday 02 January 2026 00:47:03 +0000 (0:00:01.514) 0:00:57.576 ******** 2026-01-02 00:48:24.238969 | orchestrator | changed: [testbed-manager] 2026-01-02 00:48:24.238977 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:48:24.238985 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:24.238993 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:24.239001 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:48:24.239009 | orchestrator | changed: 
[testbed-node-4] 2026-01-02 00:48:24.239016 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:48:24.239024 | orchestrator | 2026-01-02 00:48:24.239032 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-02 00:48:24.239040 | orchestrator | Friday 02 January 2026 00:47:04 +0000 (0:00:01.349) 0:00:58.925 ******** 2026-01-02 00:48:24.239048 | orchestrator | changed: [testbed-manager] 2026-01-02 00:48:24.239056 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:48:24.239064 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:24.239072 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:24.239079 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:48:24.239087 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:48:24.239095 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:48:24.239103 | orchestrator | 2026-01-02 00:48:24.239111 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239119 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:01.163) 0:01:00.089 ******** 2026-01-02 00:48:24.239127 | orchestrator | 2026-01-02 00:48:24.239135 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239143 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.062) 0:01:00.151 ******** 2026-01-02 00:48:24.239151 | orchestrator | 2026-01-02 00:48:24.239159 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239167 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.058) 0:01:00.209 ******** 2026-01-02 00:48:24.239175 | orchestrator | 2026-01-02 00:48:24.239187 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239195 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.057) 0:01:00.267 
******** 2026-01-02 00:48:24.239203 | orchestrator | 2026-01-02 00:48:24.239211 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239219 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.159) 0:01:00.427 ******** 2026-01-02 00:48:24.239227 | orchestrator | 2026-01-02 00:48:24.239235 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239243 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:00.058) 0:01:00.485 ******** 2026-01-02 00:48:24.239251 | orchestrator | 2026-01-02 00:48:24.239259 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-02 00:48:24.239267 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:00.060) 0:01:00.545 ******** 2026-01-02 00:48:24.239275 | orchestrator | 2026-01-02 00:48:24.239283 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-02 00:48:24.239291 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:00.078) 0:01:00.624 ******** 2026-01-02 00:48:24.239299 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:48:24.239307 | orchestrator | changed: [testbed-manager] 2026-01-02 00:48:24.239320 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:24.239328 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:24.239336 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:48:24.239344 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:48:24.239352 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:48:24.239359 | orchestrator | 2026-01-02 00:48:24.239367 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-02 00:48:24.239375 | orchestrator | Friday 02 January 2026 00:47:39 +0000 (0:00:33.218) 0:01:33.843 ******** 2026-01-02 00:48:24.239383 | orchestrator | changed: [testbed-node-0] 
2026-01-02 00:48:24.239391 | orchestrator | changed: [testbed-manager] 2026-01-02 00:48:24.239399 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:24.239407 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:48:24.239415 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:48:24.239423 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:24.239431 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:48:24.239438 | orchestrator | 2026-01-02 00:48:24.239446 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-02 00:48:24.239454 | orchestrator | Friday 02 January 2026 00:48:14 +0000 (0:00:35.268) 0:02:09.111 ******** 2026-01-02 00:48:24.239462 | orchestrator | ok: [testbed-manager] 2026-01-02 00:48:24.239471 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:48:24.239479 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:48:24.239487 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:48:24.239494 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:48:24.239502 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:48:24.239510 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:48:24.239518 | orchestrator | 2026-01-02 00:48:24.239526 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-02 00:48:24.239534 | orchestrator | Friday 02 January 2026 00:48:16 +0000 (0:00:02.000) 0:02:11.112 ******** 2026-01-02 00:48:24.239542 | orchestrator | changed: [testbed-manager] 2026-01-02 00:48:24.239550 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:24.239612 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:48:24.239621 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:24.239629 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:48:24.239637 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:48:24.239645 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:48:24.239652 | orchestrator | 2026-01-02 
00:48:24.239660 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:48:24.239670 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239684 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239693 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239701 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239709 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239717 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239724 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:48:24.239732 | orchestrator |
2026-01-02 00:48:24.239740 | orchestrator |
2026-01-02 00:48:24.239749 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:48:24.239763 | orchestrator | Friday 02 January 2026 00:48:21 +0000 (0:00:05.241) 0:02:16.354 ********
2026-01-02 00:48:24.239771 | orchestrator | ===============================================================================
2026-01-02 00:48:24.239779 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.27s
2026-01-02 00:48:24.239787 | orchestrator | common : Restart fluentd container ------------------------------------- 33.22s
2026-01-02 00:48:24.239795 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.74s
2026-01-02 00:48:24.239803 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.30s
2026-01-02 00:48:24.239816 | orchestrator | common : Restart cron container ----------------------------------------- 5.24s
2026-01-02 00:48:24.239825 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.24s
2026-01-02 00:48:24.239833 | orchestrator | common : Copying over config.json files for services -------------------- 4.03s
2026-01-02 00:48:24.239841 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.90s
2026-01-02 00:48:24.239849 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.39s
2026-01-02 00:48:24.239857 | orchestrator | service-check-containers : common | Check containers -------------------- 3.22s
2026-01-02 00:48:24.239865 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.89s
2026-01-02 00:48:24.239873 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.84s
2026-01-02 00:48:24.239881 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.57s
2026-01-02 00:48:24.239888 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.50s
2026-01-02 00:48:24.239896 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.43s
2026-01-02 00:48:24.239904 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.26s
2026-01-02 00:48:24.239912 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.00s
2026-01-02 00:48:24.239920 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.51s
2026-01-02 00:48:24.239928 | orchestrator | common : Creating log volume -------------------------------------------- 1.35s
2026-01-02 00:48:24.239936 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.26s
2026-01-02 00:48:24.239944 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 2026-01-02 00:48:24.239963 | orchestrator | 2026-01-02 00:48:24 | INFO  | Task 13a111fd-528b-4204-8c86-330fa3e483b7 is in state STARTED 2026-01-02 00:48:24.239972 | orchestrator | 2026-01-02 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:27.264177 | orchestrator | 2026-01-02 00:48:27 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:27.264263 | orchestrator | 2026-01-02 00:48:27 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:27.264275 | orchestrator | 2026-01-02 00:48:27 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:27.264824 | orchestrator | 2026-01-02 00:48:27 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:27.265290 | orchestrator | 2026-01-02 00:48:27 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 2026-01-02 00:48:27.266308 | orchestrator | 2026-01-02 00:48:27 | INFO  | Task 13a111fd-528b-4204-8c86-330fa3e483b7 is in state STARTED 2026-01-02 00:48:27.266354 | orchestrator | 2026-01-02 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:30.288541 | orchestrator | 2026-01-02 00:48:30 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:30.288670 | orchestrator | 2026-01-02 00:48:30 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:30.289138 | orchestrator | 2026-01-02 00:48:30 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:30.289474 | orchestrator | 2026-01-02 00:48:30 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:30.289940 | orchestrator | 2026-01-02 00:48:30 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 
2026-01-02 00:48:30.290589 | orchestrator | 2026-01-02 00:48:30 | INFO  | Task 13a111fd-528b-4204-8c86-330fa3e483b7 is in state STARTED 2026-01-02 00:48:30.290607 | orchestrator | 2026-01-02 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:33.329746 | orchestrator | 2026-01-02 00:48:33 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:33.330935 | orchestrator | 2026-01-02 00:48:33 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:33.331010 | orchestrator | 2026-01-02 00:48:33 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:33.331069 | orchestrator | 2026-01-02 00:48:33 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:33.331083 | orchestrator | 2026-01-02 00:48:33 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 2026-01-02 00:48:33.331376 | orchestrator | 2026-01-02 00:48:33 | INFO  | Task 13a111fd-528b-4204-8c86-330fa3e483b7 is in state STARTED 2026-01-02 00:48:33.331505 | orchestrator | 2026-01-02 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:36.373939 | orchestrator | 2026-01-02 00:48:36 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:36.374297 | orchestrator | 2026-01-02 00:48:36 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:36.374859 | orchestrator | 2026-01-02 00:48:36 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:36.375614 | orchestrator | 2026-01-02 00:48:36 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:36.376274 | orchestrator | 2026-01-02 00:48:36 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 2026-01-02 00:48:36.377120 | orchestrator | 2026-01-02 00:48:36 | INFO  | Task 13a111fd-528b-4204-8c86-330fa3e483b7 is in state STARTED 
2026-01-02 00:48:36.377182 | orchestrator | 2026-01-02 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:39.413135 | orchestrator | 2026-01-02 00:48:39 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:39.413263 | orchestrator | 2026-01-02 00:48:39 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:39.414430 | orchestrator | 2026-01-02 00:48:39 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:39.415061 | orchestrator | 2026-01-02 00:48:39 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:39.415708 | orchestrator | 2026-01-02 00:48:39 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 2026-01-02 00:48:39.416137 | orchestrator | 2026-01-02 00:48:39 | INFO  | Task 13a111fd-528b-4204-8c86-330fa3e483b7 is in state SUCCESS 2026-01-02 00:48:39.416411 | orchestrator | 2026-01-02 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:48:39.416444 | orchestrator | 2026-01-02 00:48:39.416462 | orchestrator | 2026-01-02 00:48:39.416477 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:48:39.416487 | orchestrator | 2026-01-02 00:48:39.416499 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:48:39.416580 | orchestrator | Friday 02 January 2026 00:48:27 +0000 (0:00:00.362) 0:00:00.362 ******** 2026-01-02 00:48:39.416601 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:48:39.416619 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:48:39.416637 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:48:39.416653 | orchestrator | 2026-01-02 00:48:39.416666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:48:39.416676 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.460) 
0:00:00.822 ******** 2026-01-02 00:48:39.416687 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-02 00:48:39.416697 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-02 00:48:39.416706 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-02 00:48:39.416716 | orchestrator | 2026-01-02 00:48:39.416728 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-02 00:48:39.416745 | orchestrator | 2026-01-02 00:48:39.416762 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-02 00:48:39.416779 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.593) 0:00:01.416 ******** 2026-01-02 00:48:39.416795 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:48:39.416813 | orchestrator | 2026-01-02 00:48:39.416839 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-02 00:48:39.416875 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.770) 0:00:02.186 ******** 2026-01-02 00:48:39.416885 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-02 00:48:39.416895 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-02 00:48:39.416904 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-02 00:48:39.416914 | orchestrator | 2026-01-02 00:48:39.416924 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-02 00:48:39.416933 | orchestrator | Friday 02 January 2026 00:48:30 +0000 (0:00:00.838) 0:00:03.025 ******** 2026-01-02 00:48:39.416943 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-02 00:48:39.416977 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-02 00:48:39.416988 | orchestrator | changed: 
[testbed-node-2] => (item=memcached) 2026-01-02 00:48:39.416997 | orchestrator | 2026-01-02 00:48:39.417007 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-01-02 00:48:39.417017 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:02.209) 0:00:05.234 ******** 2026-01-02 00:48:39.417032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:48:39.417046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:48:39.417081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:48:39.417092 | orchestrator | 2026-01-02 00:48:39.417102 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-01-02 00:48:39.417112 | orchestrator | Friday 02 January 2026 00:48:33 +0000 (0:00:01.203) 0:00:06.437 ******** 2026-01-02 00:48:39.417121 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:48:39.417131 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:48:39.417141 | orchestrator | } 2026-01-02 00:48:39.417151 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:48:39.417161 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:48:39.417170 | orchestrator | } 2026-01-02 00:48:39.417180 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:48:39.417189 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:48:39.417199 | orchestrator | } 2026-01-02 00:48:39.417209 | orchestrator | 2026-01-02 00:48:39.417218 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:48:39.417228 | orchestrator | Friday 02 January 2026 00:48:34 +0000 (0:00:00.360) 0:00:06.798 ******** 2026-01-02 00:48:39.417244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:48:39.417255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:48:39.417315 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:48:39.417326 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:48:39.417336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:48:39.417354 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:48:39.417364 | orchestrator | 2026-01-02 00:48:39.417374 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-02 00:48:39.417384 | orchestrator | Friday 02 January 2026 00:48:35 +0000 (0:00:01.352) 0:00:08.150 ******** 2026-01-02 00:48:39.417393 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:48:39.417403 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:48:39.417413 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:48:39.417422 | orchestrator | 2026-01-02 00:48:39.417639 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:48:39.417650 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:48:39.417667 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:48:39.417681 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:48:39.417696 | orchestrator | 2026-01-02 00:48:39.417713 | orchestrator | 2026-01-02 00:48:39.417729 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:48:39.417755 | orchestrator | Friday 02 January 2026 00:48:38 +0000 (0:00:02.982) 0:00:11.133 ******** 2026-01-02 00:48:39.417766 | orchestrator | =============================================================================== 2026-01-02 00:48:39.417775 | orchestrator | memcached : Restart memcached 
container --------------------------------- 2.98s 2026-01-02 00:48:39.417785 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.21s 2026-01-02 00:48:39.417794 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.35s 2026-01-02 00:48:39.417804 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.20s 2026-01-02 00:48:39.417813 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.84s 2026-01-02 00:48:39.417823 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.77s 2026-01-02 00:48:39.417833 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-01-02 00:48:39.417842 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2026-01-02 00:48:39.417852 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.36s 2026-01-02 00:48:42.597583 | orchestrator | 2026-01-02 00:48:42 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED 2026-01-02 00:48:42.597661 | orchestrator | 2026-01-02 00:48:42 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED 2026-01-02 00:48:42.597682 | orchestrator | 2026-01-02 00:48:42 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:48:42.597687 | orchestrator | 2026-01-02 00:48:42 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:48:42.597691 | orchestrator | 2026-01-02 00:48:42 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:48:42.597695 | orchestrator | 2026-01-02 00:48:42 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state STARTED 2026-01-02 00:48:42.597700 | orchestrator | 2026-01-02 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:00.986815 | orchestrator | 2026-01-02 00:49:00.986930 | orchestrator | 2026-01-02 00:49:00.987020 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:49:00.987034 | orchestrator | 2026-01-02 00:49:00.987046 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:49:00.987058 | orchestrator | Friday 02 January 2026 00:48:27 +0000 (0:00:00.322) 0:00:00.322 ******** 2026-01-02 00:49:00.987070 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:49:00.987082 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:49:00.987095 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:49:00.987106 | orchestrator | 2026-01-02 00:49:00.987117 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:49:00.987129 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.269) 0:00:00.592 ******** 2026-01-02 00:49:00.987139 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-02 00:49:00.987151 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-02 00:49:00.987162 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-02 00:49:00.987173 | orchestrator | 2026-01-02 00:49:00.987184 | orchestrator | PLAY
[Apply role redis] ******************************************************** 2026-01-02 00:49:00.987203 | orchestrator | 2026-01-02 00:49:00.987216 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-02 00:49:00.987228 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.557) 0:00:01.150 ******** 2026-01-02 00:49:00.987239 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:49:00.987259 | orchestrator | 2026-01-02 00:49:00.987270 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-02 00:49:00.987281 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.615) 0:00:01.765 ******** 2026-01-02 00:49:00.987295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987325 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 
00:49:00.987421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987433 | orchestrator | 2026-01-02 00:49:00.987445 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-02 00:49:00.987456 | orchestrator | Friday 02 January 2026 00:48:30 +0000 (0:00:01.478) 0:00:03.243 ******** 2026-01-02 00:49:00.987468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987570 | orchestrator | 2026-01-02 00:49:00.987582 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-02 00:49:00.987593 | orchestrator | Friday 02 January 2026 00:48:34 +0000 (0:00:03.739) 0:00:06.983 ******** 2026-01-02 00:49:00.987604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987701 | orchestrator | 2026-01-02 00:49:00.987715 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-01-02 00:49:00.987734 | orchestrator | Friday 02 January 2026 00:48:37 +0000 (0:00:02.790) 0:00:09.774 ******** 2026-01-02 00:49:00.987753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987827 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-02 00:49:00.987864 | orchestrator | 2026-01-02 00:49:00.987876 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-01-02 00:49:00.987892 | orchestrator | Friday 02 January 2026 00:48:39 +0000 (0:00:01.962) 0:00:11.737 ******** 2026-01-02 00:49:00.987904 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:49:00.987915 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:00.987930 | orchestrator | } 2026-01-02 00:49:00.987979 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 
00:49:00.987998 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:00.988018 | orchestrator | } 2026-01-02 00:49:00.988036 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:49:00.988054 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:00.988073 | orchestrator | } 2026-01-02 00:49:00.988092 | orchestrator | 2026-01-02 00:49:00.988108 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:49:00.988119 | orchestrator | Friday 02 January 2026 00:48:39 +0000 (0:00:00.407) 0:00:12.145 ******** 2026-01-02 00:49:00.988131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-02 00:49:00.988152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-02 00:49:00.988166 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:49:00.988184 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-02 00:49:00.988197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-02 00:49:00.988208 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:49:00.988225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-02 00:49:00.988246 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-02 00:49:00.988258 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:49:00.988269 | orchestrator | 2026-01-02 00:49:00.988280 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-02 00:49:00.988291 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:01.048) 0:00:13.193 ******** 2026-01-02 00:49:00.988302 | orchestrator | 2026-01-02 00:49:00.988320 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-02 00:49:00.988331 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:00.049) 0:00:13.243 ******** 2026-01-02 00:49:00.988341 | orchestrator | 2026-01-02 00:49:00.988352 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-02 00:49:00.988363 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:00.062) 0:00:13.305 ******** 2026-01-02 00:49:00.988374 | orchestrator | 2026-01-02 00:49:00.988385 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-02 00:49:00.988396 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:00.071) 0:00:13.376 ******** 2026-01-02 00:49:00.988406 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:00.988417 | orchestrator 
| changed: [testbed-node-1]
2026-01-02 00:49:00.988428 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:00.988439 | orchestrator |
2026-01-02 00:49:00.988450 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-02 00:49:00.988464 | orchestrator | Friday 02 January 2026 00:48:49 +0000 (0:00:08.709) 0:00:22.085 ********
2026-01-02 00:49:00.988483 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:00.988504 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:49:00.988524 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:00.988544 | orchestrator |
2026-01-02 00:49:00.988564 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:49:00.988585 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:49:00.988607 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:49:00.988627 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-02 00:49:00.988646 | orchestrator |
2026-01-02 00:49:00.988662 | orchestrator |
2026-01-02 00:49:00.988673 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:49:00.988684 | orchestrator | Friday 02 January 2026 00:48:58 +0000 (0:00:09.354) 0:00:31.439 ********
2026-01-02 00:49:00.988699 | orchestrator | ===============================================================================
2026-01-02 00:49:00.988715 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.35s
2026-01-02 00:49:00.988726 | orchestrator | redis : Restart redis container ----------------------------------------- 8.71s
2026-01-02 00:49:00.988737 | orchestrator | redis : Copying over default config.json files -------------------------- 3.74s
2026-01-02 00:49:00.988747 | orchestrator | redis : Copying over redis config files --------------------------------- 2.79s
2026-01-02 00:49:00.988758 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.96s
2026-01-02 00:49:00.988769 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.48s
2026-01-02 00:49:00.988785 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.05s
2026-01-02 00:49:00.988799 | orchestrator | redis : include_tasks --------------------------------------------------- 0.62s
2026-01-02 00:49:00.988809 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2026-01-02 00:49:00.988820 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.41s
2026-01-02 00:49:00.988831 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-01-02 00:49:00.988841 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.18s
2026-01-02 00:49:00.988852 | orchestrator | 2026-01-02 00:49:00 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:00.988864 | orchestrator | 2026-01-02 00:49:00 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:00.988875 | orchestrator | 2026-01-02 00:49:00 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:00.988893 | orchestrator | 2026-01-02 00:49:00 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:00.988904 | orchestrator | 2026-01-02 00:49:00 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:00.988922 | orchestrator | 2026-01-02 00:49:00 | INFO  | Task 4f61950d-afa4-46f9-b6ea-b5245f132259 is in state SUCCESS
2026-01-02 00:49:00.988933 | orchestrator | 2026-01-02 00:49:00 | INFO  |
Wait 1 second(s) until the next check
2026-01-02 00:49:04.356217 | orchestrator | 2026-01-02 00:49:04 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:04.379865 | orchestrator | 2026-01-02 00:49:04 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:04.384790 | orchestrator | 2026-01-02 00:49:04 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:04.391106 | orchestrator | 2026-01-02 00:49:04 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:04.394814 | orchestrator | 2026-01-02 00:49:04 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:04.394873 | orchestrator | 2026-01-02 00:49:04 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:07.457264 | orchestrator | 2026-01-02 00:49:07 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:07.457455 | orchestrator | 2026-01-02 00:49:07 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:07.457465 | orchestrator | 2026-01-02 00:49:07 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:07.457470 | orchestrator | 2026-01-02 00:49:07 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:07.457788 | orchestrator | 2026-01-02 00:49:07 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:07.457797 | orchestrator | 2026-01-02 00:49:07 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:10.476853 | orchestrator | 2026-01-02 00:49:10 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:10.477247 | orchestrator | 2026-01-02 00:49:10 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:10.479848 | orchestrator | 2026-01-02 00:49:10 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:10.482734 | orchestrator | 2026-01-02 00:49:10 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:10.483735 | orchestrator | 2026-01-02 00:49:10 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:10.483747 | orchestrator | 2026-01-02 00:49:10 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:13.515030 | orchestrator | 2026-01-02 00:49:13 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:13.516963 | orchestrator | 2026-01-02 00:49:13 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:13.519198 | orchestrator | 2026-01-02 00:49:13 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:13.521199 | orchestrator | 2026-01-02 00:49:13 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:13.522971 | orchestrator | 2026-01-02 00:49:13 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:13.523015 | orchestrator | 2026-01-02 00:49:13 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:16.557542 | orchestrator | 2026-01-02 00:49:16 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:16.558473 | orchestrator | 2026-01-02 00:49:16 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:16.559852 | orchestrator | 2026-01-02 00:49:16 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:16.560977 | orchestrator | 2026-01-02 00:49:16 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:16.562209 | orchestrator | 2026-01-02 00:49:16 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:16.562272 | orchestrator | 2026-01-02 00:49:16 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:19.594451 | orchestrator | 2026-01-02 00:49:19 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:19.595633 | orchestrator | 2026-01-02 00:49:19 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:19.597190 | orchestrator | 2026-01-02 00:49:19 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:19.598237 | orchestrator | 2026-01-02 00:49:19 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:19.599365 | orchestrator | 2026-01-02 00:49:19 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:19.599478 | orchestrator | 2026-01-02 00:49:19 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:22.638260 | orchestrator | 2026-01-02 00:49:22 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:22.639248 | orchestrator | 2026-01-02 00:49:22 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:22.641637 | orchestrator | 2026-01-02 00:49:22 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:22.642509 | orchestrator | 2026-01-02 00:49:22 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:22.644339 | orchestrator | 2026-01-02 00:49:22 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:22.644367 | orchestrator | 2026-01-02 00:49:22 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:25.685435 | orchestrator | 2026-01-02 00:49:25 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:25.686721 | orchestrator | 2026-01-02 00:49:25 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:25.688315 | orchestrator | 2026-01-02 00:49:25 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:25.690565 | orchestrator | 2026-01-02 00:49:25 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:25.699017 | orchestrator | 2026-01-02 00:49:25 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:25.699089 | orchestrator | 2026-01-02 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:28.735706 | orchestrator | 2026-01-02 00:49:28 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:28.735829 | orchestrator | 2026-01-02 00:49:28 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:28.737415 | orchestrator | 2026-01-02 00:49:28 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:28.738261 | orchestrator | 2026-01-02 00:49:28 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:28.740010 | orchestrator | 2026-01-02 00:49:28 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:28.740055 | orchestrator | 2026-01-02 00:49:28 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:31.777348 | orchestrator | 2026-01-02 00:49:31 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:31.778450 | orchestrator | 2026-01-02 00:49:31 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:31.779166 | orchestrator | 2026-01-02 00:49:31 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:31.779998 | orchestrator | 2026-01-02 00:49:31 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:31.782171 | orchestrator | 2026-01-02 00:49:31 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:31.782293 | orchestrator | 2026-01-02 00:49:31 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:34.852412 | orchestrator | 2026-01-02 00:49:34 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state STARTED
2026-01-02 00:49:34.858092 | orchestrator | 2026-01-02 00:49:34 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:34.859625 | orchestrator | 2026-01-02 00:49:34 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:34.862191 | orchestrator | 2026-01-02 00:49:34 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:34.863898 | orchestrator | 2026-01-02 00:49:34 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:34.865143 | orchestrator | 2026-01-02 00:49:34 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:37.961142 | orchestrator | 2026-01-02 00:49:37 | INFO  | Task e578a9d3-167c-49b4-8c13-0d9896ae5f33 is in state SUCCESS
2026-01-02 00:49:37.961256 | orchestrator | 2026-01-02 00:49:37 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:37.962543 | orchestrator |
2026-01-02 00:49:37.962591 | orchestrator |
2026-01-02 00:49:37.962605 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 00:49:37.962618 | orchestrator |
2026-01-02 00:49:37.962629 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 00:49:37.962641 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.407) 0:00:00.407 ********
2026-01-02 00:49:37.962652 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:49:37.962665 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:49:37.962676 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:49:37.962687 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:49:37.962698 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:49:37.962709 | orchestrator | ok: [testbed-node-5]
2026-01-02
00:49:37.962720 | orchestrator |
2026-01-02 00:49:37.962744 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 00:49:37.962756 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.827) 0:00:01.234 ********
2026-01-02 00:49:37.962767 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-02 00:49:37.962778 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-02 00:49:37.962789 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-02 00:49:37.962800 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-02 00:49:37.962811 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-02 00:49:37.962822 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-02 00:49:37.962861 | orchestrator |
2026-01-02 00:49:37.962873 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-02 00:49:37.962884 | orchestrator |
2026-01-02 00:49:37.962895 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-02 00:49:37.962906 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.805) 0:00:02.040 ********
2026-01-02 00:49:37.962918 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:49:37.963011 | orchestrator |
2026-01-02 00:49:37.963023 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-02 00:49:37.963035 | orchestrator | Friday 02 January 2026 00:48:30 +0000 (0:00:01.106) 0:00:03.147 ********
2026-01-02 00:49:37.963046 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-02 00:49:37.963058 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-02 00:49:37.963069 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-02 00:49:37.963080 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-02 00:49:37.963094 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-02 00:49:37.963107 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-02 00:49:37.963119 | orchestrator |
2026-01-02 00:49:37.963131 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-02 00:49:37.963144 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:01.855) 0:00:05.002 ********
2026-01-02 00:49:37.963156 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-02 00:49:37.963168 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-02 00:49:37.963181 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-02 00:49:37.963193 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-02 00:49:37.963205 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-02 00:49:37.963217 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-02 00:49:37.963230 | orchestrator |
2026-01-02 00:49:37.963242 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-02 00:49:37.963255 | orchestrator | Friday 02 January 2026 00:48:34 +0000 (0:00:01.680) 0:00:06.683 ********
2026-01-02 00:49:37.963268 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-02 00:49:37.963281 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:49:37.963294 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-02 00:49:37.963307 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:49:37.963318 | orchestrator |
skipping: [testbed-node-2] => (item=openvswitch)
2026-01-02 00:49:37.963330 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:49:37.963343 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-02 00:49:37.963356 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:49:37.963368 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-02 00:49:37.963382 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:49:37.963394 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-01-02 00:49:37.963407 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:49:37.963420 | orchestrator |
2026-01-02 00:49:37.963432 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-02 00:49:37.963443 | orchestrator | Friday 02 January 2026 00:48:35 +0000 (0:00:01.411) 0:00:08.094 ********
2026-01-02 00:49:37.963468 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:49:37.963479 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:49:37.963490 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:49:37.963599 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:49:37.963615 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:49:37.963626 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:49:37.963637 | orchestrator |
2026-01-02 00:49:37.963648 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-02 00:49:37.963686 | orchestrator | Friday 02 January 2026 00:48:36 +0000 (0:00:00.673) 0:00:08.768 ********
2026-01-02 00:49:37.963719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963946 | orchestrator | 2026-01-02 00:49:37.963959 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-02 00:49:37.963970 | orchestrator | Friday 02 January 2026 00:48:38 +0000 (0:00:01.617) 0:00:10.386 ******** 2026-01-02 00:49:37.963982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.963994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964097 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964163 | orchestrator | 2026-01-02 00:49:37.964174 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-02 00:49:37.964185 | orchestrator | Friday 02 January 2026 00:48:41 +0000 (0:00:02.994) 0:00:13.381 ******** 
2026-01-02 00:49:37.964196 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:49:37.964207 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:49:37.964218 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:49:37.964231 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:49:37.964243 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:49:37.964256 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:49:37.964269 | orchestrator | 2026-01-02 00:49:37.964282 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-01-02 00:49:37.964294 | orchestrator | Friday 02 January 2026 00:48:43 +0000 (0:00:01.891) 0:00:15.273 ******** 2026-01-02 00:49:37.964308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964425 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-02 00:49:37.964503 | orchestrator | 2026-01-02 00:49:37.964530 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-01-02 00:49:37.964541 | orchestrator | Friday 02 January 2026 00:48:45 +0000 (0:00:02.522) 0:00:17.795 ******** 2026-01-02 00:49:37.964657 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:49:37.964695 | orchestrator | 
 "msg": "Notifying handlers" 2026-01-02 00:49:37.964707 | orchestrator | } 2026-01-02 00:49:37.964719 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:49:37.964742 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:37.964753 | orchestrator | } 2026-01-02 00:49:37.964764 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:49:37.964775 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:37.964786 | orchestrator | } 2026-01-02 00:49:37.964797 | orchestrator | changed: [testbed-node-3] => { 2026-01-02 00:49:37.964807 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:37.964818 | orchestrator | } 2026-01-02 00:49:37.964829 | orchestrator | changed: [testbed-node-4] => { 2026-01-02 00:49:37.964849 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:37.964860 | orchestrator | } 2026-01-02 00:49:37.964871 | orchestrator | changed: [testbed-node-5] => { 2026-01-02 00:49:37.964882 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:49:37.964893 | orchestrator | } 2026-01-02 00:49:37.964904 | orchestrator | 2026-01-02 00:49:37.964916 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:49:37.964955 | orchestrator | Friday 02 January 2026 00:48:46 +0000 (0:00:00.650) 0:00:18.445 ******** 2026-01-02 00:49:37.964967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-02 00:49:37.964979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-02 00:49:37.965005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-02 00:49:37.965018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-02 00:49:37.965030 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:49:37.965042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-02 00:49:37.965075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-02 00:49:37.965087 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:49:37.965099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-02 00:49:37.965111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-02 00:49:37.965122 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:49:37.965133 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:49:37.965152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-02 00:49:37.965164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-02 00:49:37.965175 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:49:37.966014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-02 00:49:37.966110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-02 00:49:37.966122 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:49:37.966134 | orchestrator | 2026-01-02 00:49:37.966153 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-02 00:49:37.966171 | orchestrator | Friday 02 January 2026 00:48:47 +0000 (0:00:01.523) 0:00:19.968 ******** 2026-01-02 00:49:37.966190 | orchestrator | 2026-01-02 00:49:37.966209 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-02 00:49:37.966230 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:00.351) 0:00:20.320 ******** 2026-01-02 00:49:37.966249 | orchestrator | 2026-01-02 00:49:37.966268 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-02 00:49:37.966279 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:00.279) 0:00:20.599 ******** 2026-01-02 00:49:37.966290 | orchestrator | 2026-01-02 00:49:37.966301 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-02 00:49:37.966311 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:00.364) 0:00:20.964 ******** 2026-01-02 00:49:37.966322 | orchestrator | 2026-01-02 00:49:37.966350 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-02 00:49:37.966362 | 
orchestrator | Friday 02 January 2026 00:48:49 +0000 (0:00:00.355) 0:00:21.319 ******** 2026-01-02 00:49:37.966372 | orchestrator | 2026-01-02 00:49:37.966383 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-02 00:49:37.966394 | orchestrator | Friday 02 January 2026 00:48:49 +0000 (0:00:00.187) 0:00:21.506 ******** 2026-01-02 00:49:37.966405 | orchestrator | 2026-01-02 00:49:37.966416 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-02 00:49:37.966427 | orchestrator | Friday 02 January 2026 00:48:49 +0000 (0:00:00.179) 0:00:21.686 ******** 2026-01-02 00:49:37.966565 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:49:37.966577 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:49:37.966587 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:37.966598 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:49:37.966609 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:49:37.966620 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:49:37.966631 | orchestrator | 2026-01-02 00:49:37.966642 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-02 00:49:37.966665 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:10.117) 0:00:31.803 ******** 2026-01-02 00:49:37.966677 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:49:37.966689 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:49:37.966714 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:49:37.966725 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:49:37.966736 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:49:37.966746 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:49:37.966757 | orchestrator | 2026-01-02 00:49:37.966768 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-02 00:49:37.966779 | orchestrator | Friday 02 January 2026 
00:49:01 +0000 (0:00:02.119) 0:00:33.923 ******** 2026-01-02 00:49:37.966790 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:49:37.966801 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:49:37.966812 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:49:37.966822 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:49:37.966833 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:49:37.966843 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:49:37.966854 | orchestrator | 2026-01-02 00:49:37.966865 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-02 00:49:37.966876 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:10.362) 0:00:44.286 ******** 2026-01-02 00:49:37.966903 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-02 00:49:37.966915 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-02 00:49:37.966989 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-02 00:49:37.967001 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-02 00:49:37.967012 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-02 00:49:37.967035 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-02 00:49:37.967046 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-02 00:49:37.967058 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-02 00:49:37.967069 | 
orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-02 00:49:37.967080 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-02 00:49:37.967091 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-02 00:49:37.967101 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-02 00:49:37.967112 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-02 00:49:37.967124 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-02 00:49:37.967134 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-02 00:49:37.967145 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-02 00:49:37.967157 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-02 00:49:37.967168 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-02 00:49:37.967179 | orchestrator | 2026-01-02 00:49:37.967193 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-02 00:49:37.967205 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:08.841) 0:00:53.127 ******** 2026-01-02 00:49:37.967218 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-02 00:49:37.967239 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:49:37.967253 | orchestrator | 
skipping: [testbed-node-4] => (item=br-ex)  2026-01-02 00:49:37.967265 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:49:37.967277 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-02 00:49:37.967290 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:49:37.967303 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-02 00:49:37.967315 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-02 00:49:37.967328 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-02 00:49:37.967340 | orchestrator | 2026-01-02 00:49:37.967353 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-02 00:49:37.967366 | orchestrator | Friday 02 January 2026 00:49:23 +0000 (0:00:02.587) 0:00:55.715 ******** 2026-01-02 00:49:37.967379 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-02 00:49:37.967392 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:49:37.967404 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-02 00:49:37.967418 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:49:37.967430 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-02 00:49:37.967442 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:49:37.967456 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-02 00:49:37.967475 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-02 00:49:37.967489 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-02 00:49:37.967500 | orchestrator | 2026-01-02 00:49:37.967511 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-02 00:49:37.967523 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:04.382) 0:01:00.097 ******** 2026-01-02 00:49:37.967534 | orchestrator | changed: [testbed-node-0] 
2026-01-02 00:49:37.967545 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:49:37.967556 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:49:37.967566 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:49:37.967576 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:49:37.967585 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:49:37.967595 | orchestrator |
2026-01-02 00:49:37.967605 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:49:37.967615 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:37.967627 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:37.967637 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-02 00:49:37.967647 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:49:37.967657 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:49:37.967685 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-02 00:49:37.967695 | orchestrator |
2026-01-02 00:49:37.967705 | orchestrator |
2026-01-02 00:49:37.967715 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:49:37.967725 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:07.608) 0:01:07.705 ********
2026-01-02 00:49:37.967735 | orchestrator | ===============================================================================
2026-01-02 00:49:37.967745 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.97s
2026-01-02 00:49:37.967760 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.12s
2026-01-02 00:49:37.967770 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.84s
2026-01-02 00:49:37.967780 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.38s
2026-01-02 00:49:37.967790 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.99s
2026-01-02 00:49:37.967799 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.59s
2026-01-02 00:49:37.967809 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.52s
2026-01-02 00:49:37.967819 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.12s
2026-01-02 00:49:37.967829 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.89s
2026-01-02 00:49:37.967839 | orchestrator | module-load : Load modules ---------------------------------------------- 1.86s
2026-01-02 00:49:37.967848 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.72s
2026-01-02 00:49:37.967858 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.68s
2026-01-02 00:49:37.967867 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.62s
2026-01-02 00:49:37.967877 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.52s
2026-01-02 00:49:37.967887 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.41s
2026-01-02 00:49:37.967897 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.11s
2026-01-02 00:49:37.967906 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s
2026-01-02 00:49:37.967916 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-01-02 00:49:37.967941 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.67s
2026-01-02 00:49:37.967951 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.65s
2026-01-02 00:49:37.967961 | orchestrator | 2026-01-02 00:49:37 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:37.967971 | orchestrator | 2026-01-02 00:49:37 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:37.967981 | orchestrator | 2026-01-02 00:49:37 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:37.967991 | orchestrator | 2026-01-02 00:49:37 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:37.968001 | orchestrator | 2026-01-02 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:41.037185 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:41.038132 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:41.038962 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:41.040835 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:41.041686 | orchestrator | 2026-01-02 00:49:41 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:41.041742 | orchestrator | 2026-01-02 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:44.083637 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:44.089496 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:44.090601 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:44.093175 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:44.095306 | orchestrator | 2026-01-02 00:49:44 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:44.095517 | orchestrator | 2026-01-02 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:47.140751 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:47.141474 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:47.142502 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:47.144046 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:47.144225 | orchestrator | 2026-01-02 00:49:47 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:47.144996 | orchestrator | 2026-01-02 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:50.182337 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:50.182411 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:50.183104 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:50.184710 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:50.185540 | orchestrator | 2026-01-02 00:49:50 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:50.185588 | orchestrator | 2026-01-02 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:53.298798 | orchestrator | 2026-01-02 00:49:53 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:53.299996 | orchestrator | 2026-01-02 00:49:53 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:53.301083 | orchestrator | 2026-01-02 00:49:53 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:53.302105 | orchestrator | 2026-01-02 00:49:53 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:53.303231 | orchestrator | 2026-01-02 00:49:53 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:53.303276 | orchestrator | 2026-01-02 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:56.345171 | orchestrator | 2026-01-02 00:49:56 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:56.345513 | orchestrator | 2026-01-02 00:49:56 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:56.347243 | orchestrator | 2026-01-02 00:49:56 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:56.347993 | orchestrator | 2026-01-02 00:49:56 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:56.348742 | orchestrator | 2026-01-02 00:49:56 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:56.348783 | orchestrator | 2026-01-02 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:49:59.527222 | orchestrator | 2026-01-02 00:49:59 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:49:59.527297 | orchestrator | 2026-01-02 00:49:59 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:49:59.527306 | orchestrator | 2026-01-02 00:49:59 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:49:59.527312 | orchestrator | 2026-01-02 00:49:59 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:49:59.527318 | orchestrator | 2026-01-02 00:49:59 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:49:59.527324 | orchestrator | 2026-01-02 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:02.513749 | orchestrator | 2026-01-02 00:50:02 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:02.515769 | orchestrator | 2026-01-02 00:50:02 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:02.515798 | orchestrator | 2026-01-02 00:50:02 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:02.518651 | orchestrator | 2026-01-02 00:50:02 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:02.518980 | orchestrator | 2026-01-02 00:50:02 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:02.518992 | orchestrator | 2026-01-02 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:05.624851 | orchestrator | 2026-01-02 00:50:05 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:05.626262 | orchestrator | 2026-01-02 00:50:05 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:05.627098 | orchestrator | 2026-01-02 00:50:05 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:05.628517 | orchestrator | 2026-01-02 00:50:05 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:05.629320 | orchestrator | 2026-01-02 00:50:05 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:05.629334 | orchestrator | 2026-01-02 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:08.669460 | orchestrator | 2026-01-02 00:50:08 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:08.669556 | orchestrator | 2026-01-02 00:50:08 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:08.681772 | orchestrator | 2026-01-02 00:50:08 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:08.681861 | orchestrator | 2026-01-02 00:50:08 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:08.681870 | orchestrator | 2026-01-02 00:50:08 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:08.681876 | orchestrator | 2026-01-02 00:50:08 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:11.721727 | orchestrator | 2026-01-02 00:50:11 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:11.722668 | orchestrator | 2026-01-02 00:50:11 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:11.723445 | orchestrator | 2026-01-02 00:50:11 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:11.724326 | orchestrator | 2026-01-02 00:50:11 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:11.725322 | orchestrator | 2026-01-02 00:50:11 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:11.725374 | orchestrator | 2026-01-02 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:14.972999 | orchestrator | 2026-01-02 00:50:14 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:14.976640 | orchestrator | 2026-01-02 00:50:14 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:14.977497 | orchestrator | 2026-01-02 00:50:14 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:14.978156 | orchestrator | 2026-01-02 00:50:14 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:14.979000 | orchestrator | 2026-01-02 00:50:14 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:14.979038 | orchestrator | 2026-01-02 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:18.055689 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:18.056060 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:18.056783 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:18.057967 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:18.058686 | orchestrator | 2026-01-02 00:50:18 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:18.058729 | orchestrator | 2026-01-02 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:21.134078 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state STARTED
2026-01-02 00:50:21.134981 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:50:21.135491 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED
2026-01-02 00:50:21.136098 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:50:21.136699 | orchestrator | 2026-01-02 00:50:21 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:50:21.136798 | orchestrator | 2026-01-02 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:50:24.177738 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task c8f607a5-1f4d-45bc-9b0c-2a7f5ff91896 is in state SUCCESS
2026-01-02 00:50:24.178694 | orchestrator |
2026-01-02 00:50:24.178738 | orchestrator |
2026-01-02 00:50:24.178754 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-02 00:50:24.178770 | orchestrator |
2026-01-02 00:50:24.178784 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-02 00:50:24.178798 | orchestrator | Friday 02 January 2026 00:46:06 +0000 (0:00:00.161) 0:00:00.161 ********
2026-01-02 00:50:24.179013 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:50:24.179033 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:50:24.179046 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:50:24.179060 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.179075 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.179089 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.179103 | orchestrator |
2026-01-02 00:50:24.179116 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-02 00:50:24.179129 | orchestrator | Friday 02 January 2026 00:46:06 +0000 (0:00:00.613) 0:00:00.775 ********
2026-01-02 00:50:24.179143 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.179157 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.179171 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.179211 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.179224 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.179238 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.179250 | orchestrator |
2026-01-02 00:50:24.179264 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-02 00:50:24.179278 | orchestrator | Friday 02 January 2026 00:46:07 +0000 (0:00:00.557) 0:00:01.332 ********
2026-01-02 00:50:24.179292 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.179305 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.179318 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.179331 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.179344 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.179358 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.179371 | orchestrator |
2026-01-02 00:50:24.179384 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-02 00:50:24.179397 | orchestrator | Friday 02 January 2026 00:46:07 +0000 (0:00:00.656) 0:00:01.989 ********
2026-01-02 00:50:24.179411 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.179423 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.179436 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.179448 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.179460 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.179473 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.179485 | orchestrator |
2026-01-02 00:50:24.179499 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-02 00:50:24.179513 | orchestrator | Friday 02 January 2026 00:46:10 +0000 (0:00:02.876) 0:00:04.865 ********
2026-01-02 00:50:24.179527 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.179541 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.179553 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.179566 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.179580 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.179593 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.179606 | orchestrator |
2026-01-02 00:50:24.179620 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-02 00:50:24.179633 | orchestrator | Friday 02 January 2026 00:46:11 +0000 (0:00:01.074) 0:00:05.940 ********
2026-01-02 00:50:24.179647 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.179660 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.179674 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.179688 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.179702 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.179716 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.179729 | orchestrator |
2026-01-02 00:50:24.179743 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-02 00:50:24.179756 | orchestrator | Friday 02 January 2026 00:46:13 +0000 (0:00:01.197) 0:00:07.138 ********
2026-01-02 00:50:24.179770 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.179782 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.179795 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.179808 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.179820 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.179833 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.179845 | orchestrator |
2026-01-02 00:50:24.179858 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-02 00:50:24.179872 | orchestrator | Friday 02 January 2026 00:46:14 +0000 (0:00:01.309) 0:00:08.447 ********
2026-01-02 00:50:24.179886 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.179923 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.179939 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.179954 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.179969 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.179984 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.180011 | orchestrator |
2026-01-02 00:50:24.180026 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-02 00:50:24.180040 | orchestrator | Friday 02 January 2026 00:46:15 +0000 (0:00:00.948) 0:00:09.396 ********
2026-01-02 00:50:24.180054 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-02 00:50:24.180068 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-02 00:50:24.180083 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.180098 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-02 00:50:24.180112 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-02 00:50:24.180150 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.180164 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-02 00:50:24.180178 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-02 00:50:24.180192 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.180215 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-02 00:50:24.180241 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-02 00:50:24.180254 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.180267 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-02 00:50:24.180281 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-02 00:50:24.180295 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.180308 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-02 00:50:24.180322 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-02 00:50:24.180334 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.180347 | orchestrator |
2026-01-02 00:50:24.180359 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-02 00:50:24.180373 | orchestrator | Friday 02 January 2026 00:46:16 +0000 (0:00:01.128) 0:00:10.185 ********
2026-01-02 00:50:24.180387 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.180400 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.180413 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.180426 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.180439 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.180453 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.180467 | orchestrator |
2026-01-02 00:50:24.180480 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-02 00:50:24.180494 | orchestrator | Friday 02 January 2026 00:46:17 +0000 (0:00:00.712) 0:00:11.313 ********
2026-01-02 00:50:24.180507 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:50:24.180521 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:50:24.180534 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:50:24.180547 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.180560 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.180572 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.180585 | orchestrator |
2026-01-02 00:50:24.180598 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-02 00:50:24.180612 | orchestrator | Friday 02 January 2026 00:46:17 +0000 (0:00:00.712) 0:00:12.025 ********
2026-01-02 00:50:24.180625 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.180639 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.180652 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.180666 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.180679 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.180693 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.180707 | orchestrator |
2026-01-02 00:50:24.180720 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-02 00:50:24.180742 | orchestrator | Friday 02 January 2026 00:46:23 +0000 (0:00:05.579) 0:00:17.604 ********
2026-01-02 00:50:24.180755 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.180768 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.180944 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.180960 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.180973 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.180986 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181000 | orchestrator |
2026-01-02 00:50:24.181013 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-02 00:50:24.181022 | orchestrator | Friday 02 January 2026 00:46:25 +0000 (0:00:01.588) 0:00:19.193 ********
2026-01-02 00:50:24.181030 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.181037 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.181051 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.181064 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.181077 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.181090 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181104 | orchestrator |
2026-01-02 00:50:24.181119 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-02 00:50:24.181133 | orchestrator | Friday 02 January 2026 00:46:27 +0000 (0:00:02.210) 0:00:21.403 ********
2026-01-02 00:50:24.181141 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.181149 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.181157 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.181164 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.181172 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.181180 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181187 | orchestrator |
2026-01-02 00:50:24.181194 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-02 00:50:24.181200 | orchestrator | Friday 02 January 2026 00:46:28 +0000 (0:00:00.853) 0:00:22.256 ********
2026-01-02 00:50:24.181207 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-02 00:50:24.181214 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-02 00:50:24.181220 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.181227 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-02 00:50:24.181233 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-02 00:50:24.181240 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.181246 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-02 00:50:24.181253 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-02 00:50:24.181259 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.181266 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-02 00:50:24.181272 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-02 00:50:24.181279 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.181285 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-02 00:50:24.181292 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-02 00:50:24.181298 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.181305 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-02 00:50:24.181311 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-02 00:50:24.181318 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181324 | orchestrator |
2026-01-02 00:50:24.181337 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-02 00:50:24.181354 | orchestrator | Friday 02 January 2026 00:46:29 +0000 (0:00:00.975) 0:00:23.232 ********
2026-01-02 00:50:24.181361 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.181368 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.181374 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.181381 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.181394 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.181401 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181408 | orchestrator |
2026-01-02 00:50:24.181414 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-02 00:50:24.181421 | orchestrator | Friday 02 January 2026 00:46:29 +0000 (0:00:00.794) 0:00:24.027 ********
2026-01-02 00:50:24.181428 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.181434 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.181441 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.181447 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.181454 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.181460 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181467 | orchestrator |
2026-01-02 00:50:24.181473 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-02 00:50:24.181480 | orchestrator |
2026-01-02 00:50:24.181486 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-02 00:50:24.181493 | orchestrator | Friday 02 January 2026 00:46:31 +0000 (0:00:01.229) 0:00:25.257 ********
2026-01-02 00:50:24.181499 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.181506 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.181513 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.181519 | orchestrator |
2026-01-02 00:50:24.181527 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-02 00:50:24.181535 | orchestrator | Friday 02 January 2026 00:46:32 +0000 (0:00:01.679) 0:00:26.937 ********
2026-01-02 00:50:24.181543 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.181551 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.181558 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.181566 | orchestrator |
2026-01-02 00:50:24.181574 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-02 00:50:24.181582 | orchestrator | Friday 02 January 2026 00:46:33 +0000 (0:00:01.036) 0:00:27.973 ********
2026-01-02 00:50:24.181589 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.181597 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.181605 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.181613 | orchestrator |
2026-01-02 00:50:24.181620 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-02 00:50:24.181629 | orchestrator | Friday 02 January 2026 00:46:34 +0000 (0:00:00.862) 0:00:28.836 ********
2026-01-02 00:50:24.181638 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.181648 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.181658 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.181669 | orchestrator |
2026-01-02 00:50:24.181680 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-02 00:50:24.181690 | orchestrator | Friday 02 January 2026 00:46:35 +0000 (0:00:00.659) 0:00:29.496 ********
2026-01-02 00:50:24.181701 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.181713 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.181724 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.181735 | orchestrator |
2026-01-02 00:50:24.181747 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-02 00:50:24.181758 | orchestrator | Friday 02 January 2026 00:46:35 +0000 (0:00:00.478) 0:00:29.975 ********
2026-01-02 00:50:24.181768 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.181779 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.181790 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.181801 | orchestrator |
2026-01-02 00:50:24.181811 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-02 00:50:24.181822 | orchestrator | Friday 02 January 2026 00:46:37 +0000 (0:00:01.173) 0:00:31.148 ********
2026-01-02 00:50:24.181832 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.181844 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.181855 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.181867 | orchestrator |
2026-01-02 00:50:24.181887 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-02 00:50:24.181920 | orchestrator | Friday 02 January 2026 00:46:38 +0000 (0:00:01.329) 0:00:32.477 ********
2026-01-02 00:50:24.181934 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:50:24.181942 | orchestrator |
2026-01-02 00:50:24.181951 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-02 00:50:24.181963 | orchestrator | Friday 02 January 2026 00:46:38 +0000 (0:00:00.418) 0:00:32.896 ********
2026-01-02 00:50:24.181974 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.181985 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.181995 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.182006 | orchestrator |
2026-01-02 00:50:24.182070 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-02 00:50:24.182082 | orchestrator | Friday 02 January 2026 00:46:41 +0000 (0:00:02.284) 0:00:35.180 ********
2026-01-02 00:50:24.182090 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.182102 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182113 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.182125 | orchestrator |
2026-01-02 00:50:24.182136 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-02 00:50:24.182148 | orchestrator | Friday 02 January 2026 00:46:42 +0000 (0:00:01.085) 0:00:36.265 ********
2026-01-02 00:50:24.182159 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.182170 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.182182 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182191 | orchestrator |
2026-01-02 00:50:24.182199 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-02 00:50:24.182211 | orchestrator | Friday 02 January 2026 00:46:43 +0000 (0:00:01.110) 0:00:37.376 ********
2026-01-02 00:50:24.182222 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.182233 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.182252 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182264 | orchestrator |
2026-01-02 00:50:24.182275 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-02 00:50:24.182297 | orchestrator | Friday 02 January 2026 00:46:44 +0000 (0:00:01.312) 0:00:38.688 ********
2026-01-02 00:50:24.182309 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.182321 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.182333 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.182344 | orchestrator |
2026-01-02 00:50:24.182356 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-02 00:50:24.182368 | orchestrator | Friday 02 January 2026 00:46:45 +0000 (0:00:00.614) 0:00:39.303 ********
2026-01-02 00:50:24.182379 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.182390 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.182401 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.182412 | orchestrator |
2026-01-02 00:50:24.182423 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-02 00:50:24.182434 | orchestrator | Friday 02 January 2026 00:46:45 +0000 (0:00:00.597) 0:00:39.900 ********
2026-01-02 00:50:24.182446 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182457 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.182464 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.182471 | orchestrator |
2026-01-02 00:50:24.182478 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-02 00:50:24.182484 | orchestrator | Friday 02 January 2026 00:46:47 +0000 (0:00:01.606) 0:00:41.507 ********
2026-01-02 00:50:24.182491 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.182498 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.182504 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.182511 | orchestrator |
2026-01-02 00:50:24.182518 | orchestrator | TASK [k3s_server : Set node role label
selector based on Kubernetes version] *** 2026-01-02 00:50:24.182524 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:03.467) 0:00:44.975 ******** 2026-01-02 00:50:24.182538 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:50:24.182544 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:50:24.182551 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:50:24.182557 | orchestrator | 2026-01-02 00:50:24.182564 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-02 00:50:24.182571 | orchestrator | Friday 02 January 2026 00:46:51 +0000 (0:00:00.732) 0:00:45.708 ******** 2026-01-02 00:50:24.182578 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-02 00:50:24.182585 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-02 00:50:24.182592 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-02 00:50:24.182599 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-02 00:50:24.182606 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-02 00:50:24.182612 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-02 00:50:24.182619 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
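[editor's note] The "Verify that all nodes actually joined" retry output above comes from an `until`-style poll on the node list. A minimal standalone sketch of the same wait loop, where `get_ready_nodes` is a stub standing in for a real `k3s kubectl get nodes --no-headers` query (an assumption about the role's implementation, not taken from the log):

```shell
# Hedged sketch: poll until the expected number of servers report Ready,
# giving up after 20 retries, mirroring the task's retry counter above.
# get_ready_nodes stubs the real kubectl call so this runs on its own.
EXPECTED=3
get_ready_nodes() {
    printf 'testbed-node-0   Ready   control-plane,etcd,master\n'
    printf 'testbed-node-1   Ready   control-plane,etcd,master\n'
    printf 'testbed-node-2   Ready   control-plane,etcd,master\n'
}
retries=20
while [ "$retries" -gt 0 ]; do
    ready=$(get_ready_nodes | grep -c ' Ready ')
    if [ "$ready" -ge "$EXPECTED" ]; then
        break
    fi
    retries=$((retries - 1))
    sleep 3
done
echo "ready nodes: $ready (retries left: $retries)"
```

In the real run the condition failed a few times per node while etcd formed its quorum, then succeeded with retries to spare, which matches the 43-second task duration recorded below.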
2026-01-02 00:50:24.182626 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-02 00:50:24.182632 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-02 00:50:24.182639 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-02 00:50:24.182646 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-02 00:50:24.182652 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-02 00:50:24.182659 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.182666 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.182672 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.182679 | orchestrator |
2026-01-02 00:50:24.182686 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-02 00:50:24.182692 | orchestrator | Friday 02 January 2026 00:47:35 +0000 (0:00:43.577) 0:01:29.285 ********
2026-01-02 00:50:24.182699 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.182706 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.182712 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.182719 | orchestrator |
2026-01-02 00:50:24.182726 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-02 00:50:24.182732 | orchestrator | Friday 02 January 2026 00:47:35 +0000 (0:00:00.265) 0:01:29.550 ********
2026-01-02 00:50:24.182739 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.182746 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182752 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.182759 | orchestrator |
2026-01-02 00:50:24.182765 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-02 00:50:24.182772 | orchestrator | Friday 02 January 2026 00:47:36 +0000 (0:00:00.912) 0:01:30.463 ********
2026-01-02 00:50:24.182783 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.182789 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182796 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.182807 | orchestrator |
2026-01-02 00:50:24.182818 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-02 00:50:24.182825 | orchestrator | Friday 02 January 2026 00:47:37 +0000 (0:00:01.157) 0:01:31.621 ********
2026-01-02 00:50:24.182832 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.182838 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.182845 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182851 | orchestrator |
2026-01-02 00:50:24.182858 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-02 00:50:24.182865 | orchestrator | Friday 02 January 2026 00:48:02 +0000 (0:00:24.816) 0:01:56.438 ********
2026-01-02 00:50:24.182872 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.182878 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.182885 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.182891 | orchestrator |
2026-01-02 00:50:24.182922 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-02 00:50:24.182931 | orchestrator | Friday 02 January 2026 00:48:02 +0000 (0:00:00.602) 0:01:57.041 ********
2026-01-02 00:50:24.182938 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.182944 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.182951 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.182957 | orchestrator |
2026-01-02 00:50:24.182964 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-02 00:50:24.182971 | orchestrator | Friday 02 January 2026 00:48:03 +0000 (0:00:00.598) 0:01:57.639 ********
2026-01-02 00:50:24.182978 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.182984 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.182991 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.182998 | orchestrator |
2026-01-02 00:50:24.183005 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-02 00:50:24.183011 | orchestrator | Friday 02 January 2026 00:48:04 +0000 (0:00:00.556) 0:01:58.196 ********
2026-01-02 00:50:24.183018 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.183025 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.183031 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.183038 | orchestrator |
2026-01-02 00:50:24.183045 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-02 00:50:24.183051 | orchestrator | Friday 02 January 2026 00:48:04 +0000 (0:00:00.763) 0:01:58.959 ********
2026-01-02 00:50:24.183058 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.183065 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.183071 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.183078 | orchestrator |
2026-01-02 00:50:24.183084 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-02 00:50:24.183091 | orchestrator | Friday 02 January 2026 00:48:05 +0000 (0:00:00.274) 0:01:59.233 ********
2026-01-02 00:50:24.183098 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.183104 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.183111 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.183118 | orchestrator |
2026-01-02 00:50:24.183124 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-02 00:50:24.183131 | orchestrator | Friday 02 January 2026 00:48:05 +0000 (0:00:00.619) 0:01:59.852 ********
2026-01-02 00:50:24.183138 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.183144 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.183151 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.183158 | orchestrator |
2026-01-02 00:50:24.183164 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-02 00:50:24.183172 | orchestrator | Friday 02 January 2026 00:48:06 +0000 (0:00:00.566) 0:02:00.419 ********
2026-01-02 00:50:24.183183 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.183194 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.183311 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.183320 | orchestrator |
2026-01-02 00:50:24.183327 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-02 00:50:24.183341 | orchestrator | Friday 02 January 2026 00:48:07 +0000 (0:00:00.957) 0:02:01.376 ********
2026-01-02 00:50:24.183348 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:50:24.183354 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:50:24.183361 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:50:24.183368 | orchestrator |
2026-01-02 00:50:24.183375 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-02 00:50:24.183381 | orchestrator | Friday 02 January 2026 00:48:08 +0000 (0:00:00.785) 0:02:02.162 ********
2026-01-02 00:50:24.183388 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.183395 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.183402 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.183408 | orchestrator |
2026-01-02 00:50:24.183415 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-02 00:50:24.183422 | orchestrator | Friday 02 January 2026 00:48:08 +0000 (0:00:00.267) 0:02:02.430 ********
2026-01-02 00:50:24.183428 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.183435 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.183442 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.183449 | orchestrator |
2026-01-02 00:50:24.183455 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-02 00:50:24.183462 | orchestrator | Friday 02 January 2026 00:48:08 +0000 (0:00:00.383) 0:02:02.813 ********
2026-01-02 00:50:24.183469 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.183476 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.183482 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.183489 | orchestrator |
2026-01-02 00:50:24.183496 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-02 00:50:24.183503 | orchestrator | Friday 02 January 2026 00:48:09 +0000 (0:00:00.725) 0:02:03.539 ********
2026-01-02 00:50:24.183509 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.183516 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.183523 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.183529 | orchestrator |
2026-01-02 00:50:24.183536 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-02 00:50:24.183543 | orchestrator | Friday 02 January 2026 00:48:10 +0000 (0:00:00.633) 0:02:04.172 ********
2026-01-02 00:50:24.183555 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-02 00:50:24.183568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-02 00:50:24.183575 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-02 00:50:24.183582 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-02 00:50:24.183589 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-02 00:50:24.183596 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-02 00:50:24.183602 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-02 00:50:24.183609 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-02 00:50:24.183616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-02 00:50:24.183623 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-02 00:50:24.183629 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-02 00:50:24.183636 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-02 00:50:24.183643 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-02 00:50:24.183659 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-02 00:50:24.183665 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-02 00:50:24.183672 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-02 00:50:24.183679 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-02 00:50:24.183686 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-02 00:50:24.183693 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-02 00:50:24.183699 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-02 00:50:24.183706 | orchestrator |
2026-01-02 00:50:24.183713 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-02 00:50:24.183720 | orchestrator |
2026-01-02 00:50:24.183727 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-02 00:50:24.183733 | orchestrator | Friday 02 January 2026 00:48:13 +0000 (0:00:03.068) 0:02:07.241 ********
2026-01-02 00:50:24.183740 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:50:24.183747 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:50:24.183754 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:50:24.183760 | orchestrator |
2026-01-02 00:50:24.183767 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-02 00:50:24.183774 | orchestrator | Friday 02 January 2026 00:48:13 +0000 (0:00:00.453) 0:02:07.694 ********
2026-01-02 00:50:24.183781 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:50:24.183788 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:50:24.183794 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:50:24.183801 | orchestrator |
2026-01-02 00:50:24.183808 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-02 00:50:24.183815 | orchestrator | Friday 02 January 2026 00:48:15 +0000 (0:00:01.760) 0:02:09.454 ********
2026-01-02 00:50:24.183821 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:50:24.183828 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:50:24.183835 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:50:24.183842 | orchestrator |
2026-01-02 00:50:24.183848 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-02 00:50:24.183855 | orchestrator | Friday 02 January 2026 00:48:15 +0000 (0:00:00.315) 0:02:09.770 ********
2026-01-02 00:50:24.183862 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:50:24.183869 | orchestrator |
2026-01-02 00:50:24.183875 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-02 00:50:24.183882 | orchestrator | Friday 02 January 2026 00:48:16 +0000 (0:00:00.640) 0:02:10.411 ********
2026-01-02 00:50:24.183889 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.183896 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.183923 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.183930 | orchestrator |
2026-01-02 00:50:24.183938 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-02 00:50:24.183946 | orchestrator | Friday 02 January 2026 00:48:16 +0000 (0:00:00.329) 0:02:10.740 ********
2026-01-02 00:50:24.183954 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.183962 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.183969 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.183976 | orchestrator |
2026-01-02 00:50:24.183984 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-02 00:50:24.183992 | orchestrator | Friday 02 January 2026 00:48:16 +0000 (0:00:00.302) 0:02:11.043 ********
2026-01-02 00:50:24.184000 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:50:24.184007 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:50:24.184015 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:50:24.184028 | orchestrator |
2026-01-02 00:50:24.184036 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-02 00:50:24.184044 | orchestrator | Friday 02 January 2026 00:48:17 +0000 (0:00:00.356) 0:02:11.400 ********
2026-01-02 00:50:24.184056 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.184064 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.184072 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.184079 | orchestrator |
2026-01-02 00:50:24.184091 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-02 00:50:24.184100 | orchestrator | Friday 02 January 2026 00:48:18 +0000 (0:00:00.961) 0:02:12.361 ********
2026-01-02 00:50:24.184108 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.184115 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.184123 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.184131 | orchestrator |
2026-01-02 00:50:24.184138 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-02 00:50:24.184146 | orchestrator | Friday 02 January 2026 00:48:19 +0000 (0:00:01.325) 0:02:13.687 ********
2026-01-02 00:50:24.184154 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.184161 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.184169 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.184176 | orchestrator |
2026-01-02 00:50:24.184184 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-02 00:50:24.184191 | orchestrator | Friday 02 January 2026 00:48:20 +0000 (0:00:01.335) 0:02:15.023 ********
2026-01-02 00:50:24.184199 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:50:24.184207 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:50:24.184215 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:50:24.184222 | orchestrator |
2026-01-02 00:50:24.184230 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-02 00:50:24.184238 | orchestrator |
2026-01-02 00:50:24.184245 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-02 00:50:24.184252 | orchestrator | Friday 02 January 2026 00:48:31 +0000 (0:00:10.619) 0:02:25.643 ********
2026-01-02 00:50:24.184260 | orchestrator | ok: [testbed-manager]
2026-01-02 00:50:24.184268 | orchestrator |
2026-01-02 00:50:24.184276 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-02 00:50:24.184284 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:00.737) 0:02:26.380 ********
2026-01-02 00:50:24.184292 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184300 | orchestrator |
2026-01-02 00:50:24.184307 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-02 00:50:24.184314 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:00.331) 0:02:26.712 ********
2026-01-02 00:50:24.184321 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-02 00:50:24.184327 | orchestrator |
2026-01-02 00:50:24.184334 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-02 00:50:24.184341 | orchestrator | Friday 02 January 2026 00:48:33 +0000 (0:00:00.546) 0:02:27.258 ********
2026-01-02 00:50:24.184348 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184354 | orchestrator |
2026-01-02 00:50:24.184361 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-02 00:50:24.184368 | orchestrator | Friday 02 January 2026 00:48:33 +0000 (0:00:00.734) 0:02:27.993 ********
2026-01-02 00:50:24.184375 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184381 | orchestrator |
2026-01-02 00:50:24.184388 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-02 00:50:24.184395 | orchestrator | Friday 02 January 2026 00:48:34 +0000 (0:00:00.589) 0:02:28.582 ********
2026-01-02 00:50:24.184402 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-02 00:50:24.184408 | orchestrator |
2026-01-02 00:50:24.184415 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-02 00:50:24.184422 | orchestrator | Friday 02 January 2026 00:48:36 +0000 (0:00:01.591) 0:02:30.174 ********
2026-01-02 00:50:24.184433 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-02 00:50:24.184440 | orchestrator |
2026-01-02 00:50:24.184447 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-02 00:50:24.184454 | orchestrator | Friday 02 January 2026 00:48:36 +0000 (0:00:00.686) 0:02:30.861 ********
2026-01-02 00:50:24.184460 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184467 | orchestrator |
2026-01-02 00:50:24.184474 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-02 00:50:24.184481 | orchestrator | Friday 02 January 2026 00:48:37 +0000 (0:00:00.348) 0:02:31.209 ********
2026-01-02 00:50:24.184487 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184494 | orchestrator |
2026-01-02 00:50:24.184501 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-02 00:50:24.184507 | orchestrator |
2026-01-02 00:50:24.184514 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-02 00:50:24.184521 | orchestrator | Friday 02 January 2026 00:48:37 +0000 (0:00:00.593) 0:02:31.802 ********
2026-01-02 00:50:24.184527 | orchestrator | ok: [testbed-manager]
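[editor's note] The "Write kubeconfig file" / "Change server address" tasks in the play above amount to copying the k3s.yaml fetched from the first master and pointing it at the cluster endpoint instead of the node-local loopback address k3s writes by default. A minimal sketch of that rewrite; the temp paths and the `sed` expression are illustrative assumptions, only the target address `https://192.168.16.8:6443` appears in the log:

```shell
# Hedged sketch of the kubeconfig preparation: copy the fetched file with
# restrictive permissions, then rewrite the server address to the cluster
# endpoint seen in the log. File locations here are illustrative only.
workdir=$(mktemp -d)
cat > "$workdir/k3s.yaml" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
install -m 0600 "$workdir/k3s.yaml" "$workdir/config"
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' "$workdir/config"
grep 'server:' "$workdir/config"
```

The same rewrite is applied a second time for the copy used inside the manager service, which is why two "Change server address" tasks appear in the play.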
2026-01-02 00:50:24.184534 | orchestrator |
2026-01-02 00:50:24.184541 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-02 00:50:24.184548 | orchestrator | Friday 02 January 2026 00:48:37 +0000 (0:00:00.109) 0:02:31.911 ********
2026-01-02 00:50:24.184554 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-02 00:50:24.184561 | orchestrator |
2026-01-02 00:50:24.184568 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-02 00:50:24.184574 | orchestrator | Friday 02 January 2026 00:48:38 +0000 (0:00:00.180) 0:02:32.092 ********
2026-01-02 00:50:24.184581 | orchestrator | ok: [testbed-manager]
2026-01-02 00:50:24.184588 | orchestrator |
2026-01-02 00:50:24.184594 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-02 00:50:24.184601 | orchestrator | Friday 02 January 2026 00:48:38 +0000 (0:00:00.797) 0:02:32.890 ********
2026-01-02 00:50:24.184608 | orchestrator | ok: [testbed-manager]
2026-01-02 00:50:24.184614 | orchestrator |
2026-01-02 00:50:24.184621 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-02 00:50:24.184628 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:01.339) 0:02:34.229 ********
2026-01-02 00:50:24.184634 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184641 | orchestrator |
2026-01-02 00:50:24.184648 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-02 00:50:24.184655 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:00.706) 0:02:34.935 ********
2026-01-02 00:50:24.184661 | orchestrator | ok: [testbed-manager]
2026-01-02 00:50:24.184668 | orchestrator |
2026-01-02 00:50:24.184678 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-02 00:50:24.184685 | orchestrator | Friday 02 January 2026 00:48:41 +0000 (0:00:00.509) 0:02:35.444 ********
2026-01-02 00:50:24.184692 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184699 | orchestrator |
2026-01-02 00:50:24.184706 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-02 00:50:24.184713 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:07.066) 0:02:42.511 ********
2026-01-02 00:50:24.184719 | orchestrator | changed: [testbed-manager]
2026-01-02 00:50:24.184726 | orchestrator |
2026-01-02 00:50:24.184733 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-02 00:50:24.184739 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:10.836) 0:02:53.348 ********
2026-01-02 00:50:24.184746 | orchestrator | ok: [testbed-manager]
2026-01-02 00:50:24.184753 | orchestrator |
2026-01-02 00:50:24.184760 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-02 00:50:24.184766 | orchestrator |
2026-01-02 00:50:24.184773 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-02 00:50:24.184780 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:00.370) 0:02:53.718 ********
2026-01-02 00:50:24.184791 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:50:24.184798 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:50:24.184805 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:50:24.184811 | orchestrator |
2026-01-02 00:50:24.184818 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-02 00:50:24.184825 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:00.280) 0:02:53.999 ********
2026-01-02 00:50:24.184833 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.184845 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:50:24.184856 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:50:24.184867 | orchestrator |
2026-01-02 00:50:24.184878 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-02 00:50:24.184888 | orchestrator | Friday 02 January 2026 00:49:00 +0000 (0:00:00.299) 0:02:54.299 ********
2026-01-02 00:50:24.184919 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:50:24.184931 | orchestrator |
2026-01-02 00:50:24.184944 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-02 00:50:24.184951 | orchestrator | Friday 02 January 2026 00:49:01 +0000 (0:00:00.786) 0:02:55.085 ********
2026-01-02 00:50:24.184958 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-02 00:50:24.184964 | orchestrator |
2026-01-02 00:50:24.184971 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-02 00:50:24.184978 | orchestrator | Friday 02 January 2026 00:49:02 +0000 (0:00:01.818) 0:02:56.903 ********
2026-01-02 00:50:24.184985 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 00:50:24.184991 | orchestrator |
2026-01-02 00:50:24.184998 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-02 00:50:24.185005 | orchestrator | Friday 02 January 2026 00:49:04 +0000 (0:00:01.609) 0:02:58.513 ********
2026-01-02 00:50:24.185011 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.185018 | orchestrator |
2026-01-02 00:50:24.185025 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-02 00:50:24.185031 | orchestrator | Friday 02 January 2026 00:49:04 +0000 (0:00:00.120) 0:02:58.633 ********
2026-01-02 00:50:24.185038 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 00:50:24.185047 | orchestrator |
2026-01-02 00:50:24.185058 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-02 00:50:24.185069 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:00.932) 0:02:59.566 ********
2026-01-02 00:50:24.185080 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.185090 | orchestrator |
2026-01-02 00:50:24.185101 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-02 00:50:24.185112 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:00.168) 0:02:59.734 ********
2026-01-02 00:50:24.185122 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.185133 | orchestrator |
2026-01-02 00:50:24.185143 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-02 00:50:24.185154 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:00.125) 0:02:59.860 ********
2026-01-02 00:50:24.185247 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.185274 | orchestrator |
2026-01-02 00:50:24.185287 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-02 00:50:24.185298 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:00.101) 0:02:59.961 ********
2026-01-02 00:50:24.185310 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:50:24.185321 | orchestrator |
2026-01-02 00:50:24.185333 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-02 00:50:24.185344 | orchestrator | Friday 02 January 2026 00:49:06 +0000 (0:00:00.134) 0:03:00.096 ********
2026-01-02 00:50:24.185356 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-02 00:50:24.185367 | orchestrator |
2026-01-02 00:50:24.185379 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-02 00:50:24.185396 | orchestrator | Friday 02 January
2026 00:49:10 +0000 (0:00:04.645) 0:03:04.741 ******** 2026-01-02 00:50:24.185409 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-02 00:50:24.185416 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-01-02 00:50:24.185422 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-02 00:50:24.185429 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-02 00:50:24.185439 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-02 00:50:24.185450 | orchestrator | 2026-01-02 00:50:24.185466 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-02 00:50:24.185478 | orchestrator | Friday 02 January 2026 00:49:59 +0000 (0:00:48.748) 0:03:53.489 ******** 2026-01-02 00:50:24.185498 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 00:50:24.185510 | orchestrator | 2026-01-02 00:50:24.185521 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-02 00:50:24.185532 | orchestrator | Friday 02 January 2026 00:50:00 +0000 (0:00:00.933) 0:03:54.422 ******** 2026-01-02 00:50:24.185543 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-02 00:50:24.185554 | orchestrator | 2026-01-02 00:50:24.185565 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-02 00:50:24.185577 | orchestrator | Friday 02 January 2026 00:50:01 +0000 (0:00:01.313) 0:03:55.736 ******** 2026-01-02 00:50:24.185588 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-02 00:50:24.185595 | orchestrator | 2026-01-02 00:50:24.185601 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-02 00:50:24.185608 | orchestrator | Friday 02 January 2026 00:50:02 +0000 
(0:00:01.016) 0:03:56.752 ******** 2026-01-02 00:50:24.185615 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:50:24.185621 | orchestrator | 2026-01-02 00:50:24.185628 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-02 00:50:24.185635 | orchestrator | Friday 02 January 2026 00:50:02 +0000 (0:00:00.116) 0:03:56.869 ******** 2026-01-02 00:50:24.185641 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-02 00:50:24.185648 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-02 00:50:24.185655 | orchestrator | 2026-01-02 00:50:24.185661 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-02 00:50:24.185668 | orchestrator | Friday 02 January 2026 00:50:04 +0000 (0:00:01.576) 0:03:58.446 ******** 2026-01-02 00:50:24.185674 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:50:24.185681 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:50:24.185687 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:50:24.185694 | orchestrator | 2026-01-02 00:50:24.185701 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-02 00:50:24.185707 | orchestrator | Friday 02 January 2026 00:50:04 +0000 (0:00:00.354) 0:03:58.800 ******** 2026-01-02 00:50:24.185714 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:50:24.185720 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:50:24.185727 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:50:24.185734 | orchestrator | 2026-01-02 00:50:24.185740 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-02 00:50:24.185747 | orchestrator | 2026-01-02 00:50:24.185753 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-02 
00:50:24.185760 | orchestrator | Friday 02 January 2026 00:50:05 +0000 (0:00:01.045) 0:03:59.846 ******** 2026-01-02 00:50:24.185767 | orchestrator | ok: [testbed-manager] 2026-01-02 00:50:24.185773 | orchestrator | 2026-01-02 00:50:24.185780 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-01-02 00:50:24.185786 | orchestrator | Friday 02 January 2026 00:50:05 +0000 (0:00:00.118) 0:03:59.965 ******** 2026-01-02 00:50:24.185801 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-02 00:50:24.185807 | orchestrator | 2026-01-02 00:50:24.185814 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-02 00:50:24.185821 | orchestrator | Friday 02 January 2026 00:50:06 +0000 (0:00:00.203) 0:04:00.169 ******** 2026-01-02 00:50:24.185828 | orchestrator | changed: [testbed-manager] 2026-01-02 00:50:24.185839 | orchestrator | 2026-01-02 00:50:24.185851 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-02 00:50:24.185862 | orchestrator | 2026-01-02 00:50:24.185873 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-02 00:50:24.185883 | orchestrator | Friday 02 January 2026 00:50:10 +0000 (0:00:04.837) 0:04:05.006 ******** 2026-01-02 00:50:24.185894 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:50:24.186142 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:50:24.186155 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:50:24.186162 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:50:24.186169 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:50:24.186175 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:50:24.186182 | orchestrator | 2026-01-02 00:50:24.186189 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-02 00:50:24.186196 | orchestrator | 
Friday 02 January 2026 00:50:11 +0000 (0:00:00.770) 0:04:05.776 ******** 2026-01-02 00:50:24.186203 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-02 00:50:24.186209 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-02 00:50:24.186216 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-02 00:50:24.186223 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-02 00:50:24.186230 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-02 00:50:24.186236 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-02 00:50:24.186243 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-02 00:50:24.186249 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-02 00:50:24.186256 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-02 00:50:24.186263 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-02 00:50:24.186269 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-02 00:50:24.186282 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-02 00:50:24.186299 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-02 00:50:24.186306 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-02 00:50:24.186313 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-02 00:50:24.186319 | orchestrator | 
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-02 00:50:24.186326 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-02 00:50:24.186332 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-02 00:50:24.186339 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-02 00:50:24.186346 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-02 00:50:24.186352 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-02 00:50:24.186359 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-02 00:50:24.186374 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-02 00:50:24.186380 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-02 00:50:24.186387 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-02 00:50:24.186394 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-02 00:50:24.186400 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-02 00:50:24.186407 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-02 00:50:24.186413 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-02 00:50:24.186420 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-02 00:50:24.186427 | orchestrator | 2026-01-02 00:50:24.186433 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-02 
00:50:24.186440 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:09.542) 0:04:15.319 ******** 2026-01-02 00:50:24.186447 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:50:24.186454 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:50:24.186460 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:50:24.186467 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:50:24.186474 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:50:24.186480 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:50:24.186487 | orchestrator | 2026-01-02 00:50:24.186494 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-02 00:50:24.186500 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:00.516) 0:04:15.835 ******** 2026-01-02 00:50:24.186507 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:50:24.186514 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:50:24.186520 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:50:24.186527 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:50:24.186533 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:50:24.186540 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:50:24.186546 | orchestrator | 2026-01-02 00:50:24.186553 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:50:24.186560 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:50:24.186567 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-02 00:50:24.186574 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-02 00:50:24.186581 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-02 00:50:24.186588 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-02 00:50:24.186595 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-02 00:50:24.186601 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-02 00:50:24.186608 | orchestrator | 2026-01-02 00:50:24.186614 | orchestrator | 2026-01-02 00:50:24.186621 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:50:24.186628 | orchestrator | Friday 02 January 2026 00:50:22 +0000 (0:00:00.345) 0:04:16.181 ******** 2026-01-02 00:50:24.186635 | orchestrator | =============================================================================== 2026-01-02 00:50:24.186645 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 48.75s 2026-01-02 00:50:24.186652 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.58s 2026-01-02 00:50:24.186662 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.82s 2026-01-02 00:50:24.186673 | orchestrator | kubectl : Install required packages ------------------------------------ 10.84s 2026-01-02 00:50:24.186680 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.62s 2026-01-02 00:50:24.186687 | orchestrator | Manage labels ----------------------------------------------------------- 9.54s 2026-01-02 00:50:24.186694 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.07s 2026-01-02 00:50:24.186701 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.58s 2026-01-02 00:50:24.186707 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.84s 2026-01-02 00:50:24.186714 | orchestrator | 
k3s_server_post : Install Cilium ---------------------------------------- 4.65s 2026-01-02 00:50:24.186721 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.47s 2026-01-02 00:50:24.186727 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s 2026-01-02 00:50:24.186734 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.88s 2026-01-02 00:50:24.186741 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.28s 2026-01-02 00:50:24.186747 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.21s 2026-01-02 00:50:24.186754 | orchestrator | k3s_server_post : Create tmp directory on first master ------------------ 1.82s 2026-01-02 00:50:24.186761 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 1.76s 2026-01-02 00:50:24.186767 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.68s 2026-01-02 00:50:24.186774 | orchestrator | k3s_server_post : Wait for connectivity to kube VIP --------------------- 1.61s 2026-01-02 00:50:24.186781 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.61s 2026-01-02 00:50:24.186787 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task c13d28fc-3d25-42db-b738-69edc45143d2 is in state STARTED 2026-01-02 00:50:24.186794 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:24.186801 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:24.187568 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:24.189292 | orchestrator | 2026-01-02 00:50:24 | INFO  | 
Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:24.190856 | orchestrator | 2026-01-02 00:50:24 | INFO  | Task 1ae719fc-56c0-4eda-8862-97961f46adbd is in state STARTED 2026-01-02 00:50:24.191233 | orchestrator | 2026-01-02 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:27.234126 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task c13d28fc-3d25-42db-b738-69edc45143d2 is in state STARTED 2026-01-02 00:50:27.235498 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:27.238510 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:27.239129 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:27.239880 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:27.240590 | orchestrator | 2026-01-02 00:50:27 | INFO  | Task 1ae719fc-56c0-4eda-8862-97961f46adbd is in state STARTED 2026-01-02 00:50:27.240769 | orchestrator | 2026-01-02 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:30.265393 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task c13d28fc-3d25-42db-b738-69edc45143d2 is in state SUCCESS 2026-01-02 00:50:30.266582 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:30.267458 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:30.268351 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:30.269257 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:30.270172 | orchestrator | 2026-01-02 00:50:30 | INFO  | Task 
1ae719fc-56c0-4eda-8862-97961f46adbd is in state STARTED 2026-01-02 00:50:30.270483 | orchestrator | 2026-01-02 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:33.303206 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:33.303392 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:33.304237 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:33.305030 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:33.308937 | orchestrator | 2026-01-02 00:50:33 | INFO  | Task 1ae719fc-56c0-4eda-8862-97961f46adbd is in state STARTED 2026-01-02 00:50:33.308986 | orchestrator | 2026-01-02 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:36.333796 | orchestrator | 2026-01-02 00:50:36 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:36.334772 | orchestrator | 2026-01-02 00:50:36 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:36.336108 | orchestrator | 2026-01-02 00:50:36 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:36.337337 | orchestrator | 2026-01-02 00:50:36 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:36.338298 | orchestrator | 2026-01-02 00:50:36 | INFO  | Task 1ae719fc-56c0-4eda-8862-97961f46adbd is in state SUCCESS 2026-01-02 00:50:36.338355 | orchestrator | 2026-01-02 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:39.364526 | orchestrator | 2026-01-02 00:50:39 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:39.364613 | orchestrator | 2026-01-02 00:50:39 | INFO  | Task 
b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:39.364631 | orchestrator | 2026-01-02 00:50:39 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:39.365287 | orchestrator | 2026-01-02 00:50:39 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:39.365316 | orchestrator | 2026-01-02 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:42.392474 | orchestrator | 2026-01-02 00:50:42 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:42.393085 | orchestrator | 2026-01-02 00:50:42 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:42.393875 | orchestrator | 2026-01-02 00:50:42 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:42.394645 | orchestrator | 2026-01-02 00:50:42 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:42.394670 | orchestrator | 2026-01-02 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:45.439993 | orchestrator | 2026-01-02 00:50:45 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:45.442086 | orchestrator | 2026-01-02 00:50:45 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:45.444558 | orchestrator | 2026-01-02 00:50:45 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:45.446134 | orchestrator | 2026-01-02 00:50:45 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:45.446435 | orchestrator | 2026-01-02 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:48.490368 | orchestrator | 2026-01-02 00:50:48 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:48.492568 | orchestrator | 2026-01-02 00:50:48 | INFO  | Task 
b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:48.494702 | orchestrator | 2026-01-02 00:50:48 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:48.498697 | orchestrator | 2026-01-02 00:50:48 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:48.499272 | orchestrator | 2026-01-02 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:51.534775 | orchestrator | 2026-01-02 00:50:51 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:51.534874 | orchestrator | 2026-01-02 00:50:51 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:51.535631 | orchestrator | 2026-01-02 00:50:51 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:51.536513 | orchestrator | 2026-01-02 00:50:51 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:51.536560 | orchestrator | 2026-01-02 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:54.572543 | orchestrator | 2026-01-02 00:50:54 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:54.572819 | orchestrator | 2026-01-02 00:50:54 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:54.575407 | orchestrator | 2026-01-02 00:50:54 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:54.577769 | orchestrator | 2026-01-02 00:50:54 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:54.578228 | orchestrator | 2026-01-02 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:50:57.615448 | orchestrator | 2026-01-02 00:50:57 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:50:57.617622 | orchestrator | 2026-01-02 00:50:57 | INFO  | Task 
b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:50:57.620022 | orchestrator | 2026-01-02 00:50:57 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:50:57.621631 | orchestrator | 2026-01-02 00:50:57 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:50:57.621699 | orchestrator | 2026-01-02 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:00.648200 | orchestrator | 2026-01-02 00:51:00 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:51:00.648356 | orchestrator | 2026-01-02 00:51:00 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:51:00.649363 | orchestrator | 2026-01-02 00:51:00 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:51:00.651111 | orchestrator | 2026-01-02 00:51:00 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:51:00.651130 | orchestrator | 2026-01-02 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:03.675335 | orchestrator | 2026-01-02 00:51:03 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:51:03.675550 | orchestrator | 2026-01-02 00:51:03 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:51:03.676082 | orchestrator | 2026-01-02 00:51:03 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:51:03.677322 | orchestrator | 2026-01-02 00:51:03 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:51:03.677350 | orchestrator | 2026-01-02 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:06.713326 | orchestrator | 2026-01-02 00:51:06 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:51:06.713431 | orchestrator | 2026-01-02 00:51:06 | INFO  | Task 
b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:51:06.715194 | orchestrator | 2026-01-02 00:51:06 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:51:06.715500 | orchestrator | 2026-01-02 00:51:06 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:51:06.715536 | orchestrator | 2026-01-02 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:09.744259 | orchestrator | 2026-01-02 00:51:09 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:51:09.746471 | orchestrator | 2026-01-02 00:51:09 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:51:09.750113 | orchestrator | 2026-01-02 00:51:09 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:51:09.751777 | orchestrator | 2026-01-02 00:51:09 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:51:09.752028 | orchestrator | 2026-01-02 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:12.790339 | orchestrator | 2026-01-02 00:51:12 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:51:12.790462 | orchestrator | 2026-01-02 00:51:12 | INFO  | Task b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:51:12.791182 | orchestrator | 2026-01-02 00:51:12 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:51:12.792061 | orchestrator | 2026-01-02 00:51:12 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:51:12.792100 | orchestrator | 2026-01-02 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:51:15.826312 | orchestrator | 2026-01-02 00:51:15 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:51:15.827951 | orchestrator | 2026-01-02 00:51:15 | INFO  | Task 
b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state STARTED 2026-01-02 00:51:15.829513 | orchestrator | 2026-01-02 00:51:15 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:51:15.831315 | orchestrator | 2026-01-02 00:51:15 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:51:15.831401 | orchestrator | 2026-01-02 00:51:15 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks bac3598c, b8f19085, ae7cb1e4, and a72f01af, all in state STARTED, repeated every ~3 seconds from 00:51:18 through 00:51:58 — elided] 2026-01-02 00:52:01.376775 | orchestrator | 2026-01-02 00:52:01 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:52:01.377571 | orchestrator | 2026-01-02 00:52:01 | INFO  | Task 
b8f19085-f3f4-4e6f-976f-6e69bc40ef39 is in state SUCCESS 2026-01-02 00:52:01.379277 | orchestrator | 2026-01-02 00:52:01.379326 | orchestrator | 2026-01-02 00:52:01.379339 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-02 00:52:01.379346 | orchestrator | 2026-01-02 00:52:01.379351 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-02 00:52:01.379356 | orchestrator | Friday 02 January 2026 00:50:26 +0000 (0:00:00.457) 0:00:00.457 ******** 2026-01-02 00:52:01.379362 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-02 00:52:01.379366 | orchestrator | 2026-01-02 00:52:01.379371 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-02 00:52:01.379376 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:00.970) 0:00:01.427 ******** 2026-01-02 00:52:01.379380 | orchestrator | changed: [testbed-manager] 2026-01-02 00:52:01.379385 | orchestrator | 2026-01-02 00:52:01.379390 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-02 00:52:01.379394 | orchestrator | Friday 02 January 2026 00:50:28 +0000 (0:00:01.127) 0:00:02.554 ******** 2026-01-02 00:52:01.379399 | orchestrator | changed: [testbed-manager] 2026-01-02 00:52:01.379403 | orchestrator | 2026-01-02 00:52:01.379408 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:52:01.379412 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:52:01.379419 | orchestrator | 2026-01-02 00:52:01.379423 | orchestrator | 2026-01-02 00:52:01.379428 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:52:01.379432 | orchestrator | Friday 02 January 2026 00:50:28 +0000 (0:00:00.386) 0:00:02.941 ******** 
2026-01-02 00:52:01.379437 | orchestrator | =============================================================================== 2026-01-02 00:52:01.379456 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.13s 2026-01-02 00:52:01.379461 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.97s 2026-01-02 00:52:01.379465 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2026-01-02 00:52:01.379470 | orchestrator | 2026-01-02 00:52:01.379474 | orchestrator | 2026-01-02 00:52:01.379478 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-02 00:52:01.379483 | orchestrator | 2026-01-02 00:52:01.379544 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-02 00:52:01.379550 | orchestrator | Friday 02 January 2026 00:50:26 +0000 (0:00:00.144) 0:00:00.144 ******** 2026-01-02 00:52:01.379554 | orchestrator | ok: [testbed-manager] 2026-01-02 00:52:01.379560 | orchestrator | 2026-01-02 00:52:01.379565 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-02 00:52:01.379570 | orchestrator | Friday 02 January 2026 00:50:26 +0000 (0:00:00.676) 0:00:00.820 ******** 2026-01-02 00:52:01.379574 | orchestrator | ok: [testbed-manager] 2026-01-02 00:52:01.379579 | orchestrator | 2026-01-02 00:52:01.379583 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-02 00:52:01.379588 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:00.696) 0:00:01.516 ******** 2026-01-02 00:52:01.379595 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-02 00:52:01.379602 | orchestrator | 2026-01-02 00:52:01.379609 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-02 00:52:01.379616 | 
orchestrator | Friday 02 January 2026 00:50:28 +0000 (0:00:00.740) 0:00:02.257 ******** 2026-01-02 00:52:01.379624 | orchestrator | changed: [testbed-manager] 2026-01-02 00:52:01.379631 | orchestrator | 2026-01-02 00:52:01.379639 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-02 00:52:01.379650 | orchestrator | Friday 02 January 2026 00:50:29 +0000 (0:00:01.500) 0:00:03.757 ******** 2026-01-02 00:52:01.379657 | orchestrator | changed: [testbed-manager] 2026-01-02 00:52:01.379665 | orchestrator | 2026-01-02 00:52:01.379672 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-02 00:52:01.379680 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:00.488) 0:00:04.246 ******** 2026-01-02 00:52:01.379688 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-02 00:52:01.379696 | orchestrator | 2026-01-02 00:52:01.379713 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-02 00:52:01.379720 | orchestrator | Friday 02 January 2026 00:50:31 +0000 (0:00:01.547) 0:00:05.793 ******** 2026-01-02 00:52:01.379727 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-02 00:52:01.379731 | orchestrator | 2026-01-02 00:52:01.379736 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-02 00:52:01.379740 | orchestrator | Friday 02 January 2026 00:50:32 +0000 (0:00:00.718) 0:00:06.511 ******** 2026-01-02 00:52:01.379745 | orchestrator | ok: [testbed-manager] 2026-01-02 00:52:01.379749 | orchestrator | 2026-01-02 00:52:01.379753 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-02 00:52:01.379758 | orchestrator | Friday 02 January 2026 00:50:33 +0000 (0:00:00.322) 0:00:06.833 ******** 2026-01-02 00:52:01.379762 | orchestrator | ok: [testbed-manager] 2026-01-02 00:52:01.379767 | 
orchestrator | 2026-01-02 00:52:01.379771 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:52:01.379782 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 00:52:01.379787 | orchestrator | 2026-01-02 00:52:01.379791 | orchestrator | 2026-01-02 00:52:01.379796 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:52:01.379800 | orchestrator | Friday 02 January 2026 00:50:33 +0000 (0:00:00.268) 0:00:07.102 ******** 2026-01-02 00:52:01.379805 | orchestrator | =============================================================================== 2026-01-02 00:52:01.379815 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.55s 2026-01-02 00:52:01.379819 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.50s 2026-01-02 00:52:01.379831 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2026-01-02 00:52:01.379865 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2026-01-02 00:52:01.379871 | orchestrator | Create .kube directory -------------------------------------------------- 0.70s 2026-01-02 00:52:01.379875 | orchestrator | Get home directory of operator user ------------------------------------- 0.68s 2026-01-02 00:52:01.379879 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.49s 2026-01-02 00:52:01.379884 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.32s 2026-01-02 00:52:01.379888 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2026-01-02 00:52:01.379892 | orchestrator | 2026-01-02 00:52:01.379897 | orchestrator | 2026-01-02 00:52:01.379901 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2026-01-02 00:52:01.379906 | orchestrator | 2026-01-02 00:52:01.379910 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-02 00:52:01.379914 | orchestrator | Friday 02 January 2026 00:48:44 +0000 (0:00:00.102) 0:00:00.102 ******** 2026-01-02 00:52:01.379919 | orchestrator | ok: [localhost] => { 2026-01-02 00:52:01.379925 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-02 00:52:01.379929 | orchestrator | } 2026-01-02 00:52:01.379934 | orchestrator | 2026-01-02 00:52:01.379938 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-02 00:52:01.379943 | orchestrator | Friday 02 January 2026 00:48:44 +0000 (0:00:00.079) 0:00:00.181 ******** 2026-01-02 00:52:01.379948 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-02 00:52:01.379954 | orchestrator | ...ignoring 2026-01-02 00:52:01.379959 | orchestrator | 2026-01-02 00:52:01.379963 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-02 00:52:01.379967 | orchestrator | Friday 02 January 2026 00:48:47 +0000 (0:00:03.021) 0:00:03.203 ******** 2026-01-02 00:52:01.379972 | orchestrator | skipping: [localhost] 2026-01-02 00:52:01.379976 | orchestrator | 2026-01-02 00:52:01.379980 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-02 00:52:01.379985 | orchestrator | Friday 02 January 2026 00:48:47 +0000 (0:00:00.152) 0:00:03.355 ******** 2026-01-02 00:52:01.379989 | orchestrator | ok: [localhost] 2026-01-02 00:52:01.379994 | orchestrator | 2026-01-02 00:52:01.379998 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-02 00:52:01.380002 | orchestrator | 2026-01-02 00:52:01.380007 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:52:01.380011 | orchestrator | Friday 02 January 2026 00:48:47 +0000 (0:00:00.384) 0:00:03.740 ******** 2026-01-02 00:52:01.380015 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:01.380020 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:01.380024 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:01.380028 | orchestrator | 2026-01-02 00:52:01.380033 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:52:01.380037 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:00.623) 0:00:04.364 ******** 2026-01-02 00:52:01.380041 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-02 00:52:01.380046 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-02 00:52:01.380051 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-02 00:52:01.380055 | orchestrator | 2026-01-02 00:52:01.380059 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-02 00:52:01.380064 | orchestrator | 2026-01-02 00:52:01.380072 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-02 00:52:01.380076 | orchestrator | Friday 02 January 2026 00:48:49 +0000 (0:00:00.780) 0:00:05.144 ******** 2026-01-02 00:52:01.380081 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:01.380086 | orchestrator | 2026-01-02 00:52:01.380090 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-02 00:52:01.380094 | orchestrator | Friday 02 January 2026 00:48:51 +0000 (0:00:01.864) 0:00:07.009 ******** 2026-01-02 
00:52:01.380099 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:01.380103 | orchestrator | 2026-01-02 00:52:01.380107 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-02 00:52:01.380112 | orchestrator | Friday 02 January 2026 00:48:52 +0000 (0:00:01.285) 0:00:08.295 ******** 2026-01-02 00:52:01.380116 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.380120 | orchestrator | 2026-01-02 00:52:01.380125 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-02 00:52:01.380129 | orchestrator | Friday 02 January 2026 00:48:52 +0000 (0:00:00.227) 0:00:08.522 ******** 2026-01-02 00:52:01.380133 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.380140 | orchestrator | 2026-01-02 00:52:01.380147 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-02 00:52:01.380154 | orchestrator | Friday 02 January 2026 00:48:52 +0000 (0:00:00.304) 0:00:08.827 ******** 2026-01-02 00:52:01.380164 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.380171 | orchestrator | 2026-01-02 00:52:01.380178 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-02 00:52:01.380185 | orchestrator | Friday 02 January 2026 00:48:53 +0000 (0:00:00.258) 0:00:09.085 ******** 2026-01-02 00:52:01.380192 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.380199 | orchestrator | 2026-01-02 00:52:01.380206 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-02 00:52:01.380214 | orchestrator | Friday 02 January 2026 00:48:53 +0000 (0:00:00.431) 0:00:09.517 ******** 2026-01-02 00:52:01.380221 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:01.380228 | orchestrator | 2026-01-02 00:52:01.380235 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-02 00:52:01.380248 | orchestrator | Friday 02 January 2026 00:48:54 +0000 (0:00:00.456) 0:00:09.974 ******** 2026-01-02 00:52:01.380256 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:01.380263 | orchestrator | 2026-01-02 00:52:01.380270 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-02 00:52:01.380279 | orchestrator | Friday 02 January 2026 00:48:54 +0000 (0:00:00.818) 0:00:10.792 ******** 2026-01-02 00:52:01.380287 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.380293 | orchestrator | 2026-01-02 00:52:01.380300 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-02 00:52:01.380308 | orchestrator | Friday 02 January 2026 00:48:55 +0000 (0:00:00.232) 0:00:11.024 ******** 2026-01-02 00:52:01.380316 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.380325 | orchestrator | 2026-01-02 00:52:01.380333 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-02 00:52:01.380340 | orchestrator | Friday 02 January 2026 00:48:55 +0000 (0:00:00.239) 0:00:11.264 ******** 2026-01-02 00:52:01.380353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380392 | orchestrator | 2026-01-02 00:52:01.380399 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-02 00:52:01.380407 | orchestrator | Friday 02 January 2026 00:48:56 +0000 (0:00:00.801) 0:00:12.065 ******** 2026-01-02 00:52:01.380433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380443 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380465 | orchestrator | 2026-01-02 00:52:01.380473 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-02 00:52:01.380481 | orchestrator | Friday 02 January 2026 00:48:57 +0000 (0:00:01.588) 0:00:13.654 ******** 2026-01-02 00:52:01.380489 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-02 00:52:01.380497 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-02 00:52:01.380508 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-02 00:52:01.380517 | orchestrator | 2026-01-02 00:52:01.380525 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-02 00:52:01.380532 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:02.073) 0:00:15.727 ******** 2026-01-02 00:52:01.380539 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-02 00:52:01.380546 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-02 00:52:01.380553 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-02 00:52:01.380562 | orchestrator | 2026-01-02 00:52:01.380570 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-02 00:52:01.380582 | orchestrator | Friday 02 January 2026 00:49:02 +0000 (0:00:02.802) 0:00:18.532 ******** 2026-01-02 00:52:01.380669 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-02 00:52:01.380679 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-02 00:52:01.380686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-02 00:52:01.380694 | orchestrator | 2026-01-02 00:52:01.380701 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-02 00:52:01.380715 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:02.377) 0:00:20.909 ******** 2026-01-02 00:52:01.380722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-02 00:52:01.380729 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-02 00:52:01.380745 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-02 00:52:01.380753 | orchestrator | 2026-01-02 00:52:01.380761 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-02 00:52:01.380770 | orchestrator | Friday 02 January 2026 00:49:06 +0000 (0:00:01.809) 0:00:22.718 ******** 2026-01-02 00:52:01.380778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-02 00:52:01.380786 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-02 00:52:01.380793 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-02 00:52:01.380801 | orchestrator | 2026-01-02 00:52:01.380811 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-02 00:52:01.380818 | orchestrator | Friday 02 January 2026 00:49:08 +0000 (0:00:02.071) 0:00:24.790 ******** 2026-01-02 00:52:01.380826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-02 
00:52:01.380833 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-02 00:52:01.380840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-02 00:52:01.380862 | orchestrator | 2026-01-02 00:52:01.380869 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-02 00:52:01.380875 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:01.371) 0:00:26.161 ******** 2026-01-02 00:52:01.380893 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:01.380900 | orchestrator | 2026-01-02 00:52:01.380907 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-02 00:52:01.380915 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:00.577) 0:00:26.739 ******** 2026-01-02 00:52:01.380924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.380969 | orchestrator | 2026-01-02 00:52:01.380975 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-01-02 00:52:01.380979 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:01.327) 0:00:28.066 ******** 2026-01-02 00:52:01.380984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.380989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.380994 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.381003 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:01.381012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381018 | 
orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:01.381022 | orchestrator | 2026-01-02 00:52:01.381026 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-02 00:52:01.381031 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:00.749) 0:00:28.815 ******** 2026-01-02 00:52:01.381036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381041 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.381045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381050 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:01.381077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381086 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:01.381091 | orchestrator | 2026-01-02 00:52:01.381095 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 
2026-01-02 00:52:01.381103 | orchestrator | Friday 02 January 2026 00:49:13 +0000 (0:00:00.736) 0:00:29.552 ******** 2026-01-02 00:52:01.381108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.381113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.381118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:52:01.381127 | orchestrator | 2026-01-02 00:52:01.381132 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-02 00:52:01.381136 | orchestrator | Friday 02 January 2026 00:49:14 +0000 (0:00:01.079) 0:00:30.631 ******** 2026-01-02 00:52:01.381149 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:52:01.381157 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:01.381162 | orchestrator | } 2026-01-02 00:52:01.381166 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:52:01.381171 | orchestrator |  
"msg": "Notifying handlers" 2026-01-02 00:52:01.381175 | orchestrator | } 2026-01-02 00:52:01.381179 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:52:01.381184 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:01.381188 | orchestrator | } 2026-01-02 00:52:01.381192 | orchestrator | 2026-01-02 00:52:01.381197 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:52:01.381201 | orchestrator | Friday 02 January 2026 00:49:15 +0000 (0:00:00.490) 0:00:31.122 ******** 2026-01-02 00:52:01.381212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381223 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.381228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381233 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:01.381238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:52:01.381246 | 
orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:01.381251 | orchestrator | 2026-01-02 00:52:01.381256 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-02 00:52:01.381260 | orchestrator | Friday 02 January 2026 00:49:15 +0000 (0:00:00.646) 0:00:31.768 ******** 2026-01-02 00:52:01.381264 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:01.381269 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:01.381273 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:01.381277 | orchestrator | 2026-01-02 00:52:01.381282 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-02 00:52:01.381289 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:01.135) 0:00:32.903 ******** 2026-01-02 00:52:01.381293 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:01.381298 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:01.381302 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:01.381306 | orchestrator | 2026-01-02 00:52:01.381311 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-02 00:52:01.381315 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:10.886) 0:00:43.790 ******** 2026-01-02 00:52:01.381320 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:01.381324 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:01.381328 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:01.381333 | orchestrator | 2026-01-02 00:52:01.381337 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-02 00:52:01.381342 | orchestrator | 2026-01-02 00:52:01.381346 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-02 00:52:01.381354 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:01.246) 0:00:45.037 ******** 2026-01-02 
00:52:01.381358 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:01.381363 | orchestrator | 2026-01-02 00:52:01.381367 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-02 00:52:01.381372 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:01.948) 0:00:46.985 ******** 2026-01-02 00:52:01.381377 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:01.381384 | orchestrator | 2026-01-02 00:52:01.381390 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-02 00:52:01.381397 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:00.140) 0:00:47.125 ******** 2026-01-02 00:52:01.381404 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:01.381411 | orchestrator | 2026-01-02 00:52:01.381419 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-02 00:52:01.381427 | orchestrator | Friday 02 January 2026 00:49:32 +0000 (0:00:01.523) 0:00:48.649 ******** 2026-01-02 00:52:01.381434 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:01.381441 | orchestrator | 2026-01-02 00:52:01.381448 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-02 00:52:01.381455 | orchestrator | 2026-01-02 00:52:01.381462 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-02 00:52:01.381469 | orchestrator | Friday 02 January 2026 00:51:27 +0000 (0:01:54.450) 0:02:43.099 ******** 2026-01-02 00:52:01.381476 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:01.381485 | orchestrator | 2026-01-02 00:52:01.381492 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-02 00:52:01.381499 | orchestrator | Friday 02 January 2026 00:51:27 +0000 (0:00:00.695) 0:02:43.794 ******** 2026-01-02 00:52:01.381513 | orchestrator | skipping: [testbed-node-1] 
2026-01-02 00:52:01.381521 | orchestrator | 2026-01-02 00:52:01.381528 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-02 00:52:01.381536 | orchestrator | Friday 02 January 2026 00:51:28 +0000 (0:00:00.097) 0:02:43.892 ******** 2026-01-02 00:52:01.381542 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:01.381546 | orchestrator | 2026-01-02 00:52:01.381551 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-02 00:52:01.381558 | orchestrator | Friday 02 January 2026 00:51:29 +0000 (0:00:01.732) 0:02:45.624 ******** 2026-01-02 00:52:01.381565 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:01.381572 | orchestrator | 2026-01-02 00:52:01.381579 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-02 00:52:01.381588 | orchestrator | 2026-01-02 00:52:01.381597 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-02 00:52:01.381603 | orchestrator | Friday 02 January 2026 00:51:40 +0000 (0:00:10.814) 0:02:56.439 ******** 2026-01-02 00:52:01.381610 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:01.381617 | orchestrator | 2026-01-02 00:52:01.381626 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-02 00:52:01.381634 | orchestrator | Friday 02 January 2026 00:51:41 +0000 (0:00:00.766) 0:02:57.206 ******** 2026-01-02 00:52:01.381642 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:01.381650 | orchestrator | 2026-01-02 00:52:01.381667 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-02 00:52:01.381676 | orchestrator | Friday 02 January 2026 00:51:41 +0000 (0:00:00.098) 0:02:57.304 ******** 2026-01-02 00:52:01.381683 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:01.381690 | orchestrator | 2026-01-02 
00:52:01.381771 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-02 00:52:01.381790 | orchestrator | Friday 02 January 2026 00:51:42 +0000 (0:00:01.504) 0:02:58.809 ******** 2026-01-02 00:52:01.381798 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:01.381806 | orchestrator | 2026-01-02 00:52:01.381813 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-02 00:52:01.381822 | orchestrator | 2026-01-02 00:52:01.381832 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-02 00:52:01.381840 | orchestrator | Friday 02 January 2026 00:51:55 +0000 (0:00:12.234) 0:03:11.043 ******** 2026-01-02 00:52:01.381898 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:01.381906 | orchestrator | 2026-01-02 00:52:01.381913 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-02 00:52:01.381920 | orchestrator | Friday 02 January 2026 00:51:55 +0000 (0:00:00.471) 0:03:11.515 ******** 2026-01-02 00:52:01.381926 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:01.381934 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:01.381942 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:01.381951 | orchestrator | 2026-01-02 00:52:01.381958 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:52:01.381967 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-02 00:52:01.381977 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-02 00:52:01.381993 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-02 00:52:01.382000 | orchestrator | testbed-node-2 : ok=24  changed=16  
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-02 00:52:01.382008 | orchestrator | 2026-01-02 00:52:01.382092 | orchestrator | 2026-01-02 00:52:01.382100 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:52:01.382117 | orchestrator | Friday 02 January 2026 00:51:58 +0000 (0:00:02.709) 0:03:14.225 ******** 2026-01-02 00:52:01.382123 | orchestrator | =============================================================================== 2026-01-02 00:52:01.382128 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 137.50s 2026-01-02 00:52:01.382144 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 10.89s 2026-01-02 00:52:01.382152 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.76s 2026-01-02 00:52:01.382159 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.41s 2026-01-02 00:52:01.382166 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.02s 2026-01-02 00:52:01.382176 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.80s 2026-01-02 00:52:01.382184 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.71s 2026-01-02 00:52:01.382191 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.38s 2026-01-02 00:52:01.382199 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.07s 2026-01-02 00:52:01.382207 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.07s 2026-01-02 00:52:01.382217 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.86s 2026-01-02 00:52:01.382225 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.81s 
2026-01-02 00:52:01.382234 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.59s 2026-01-02 00:52:01.382242 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s 2026-01-02 00:52:01.382250 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.33s 2026-01-02 00:52:01.382258 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.29s 2026-01-02 00:52:01.382267 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.25s 2026-01-02 00:52:01.382275 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.14s 2026-01-02 00:52:01.382282 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.08s 2026-01-02 00:52:01.382290 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.82s 2026-01-02 00:52:01.382299 | orchestrator | 2026-01-02 00:52:01 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:52:01.382307 | orchestrator | 2026-01-02 00:52:01 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:52:01.382315 | orchestrator | 2026-01-02 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:52:04.410189 | orchestrator | 2026-01-02 00:52:04 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:52:04.411148 | orchestrator | 2026-01-02 00:52:04 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:52:04.411743 | orchestrator | 2026-01-02 00:52:04 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED 2026-01-02 00:52:04.411785 | orchestrator | 2026-01-02 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:52:07.457395 | orchestrator | 2026-01-02 00:52:07 | INFO  | Task 
bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:07.459335 | orchestrator | 2026-01-02 00:52:07 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:07.460023 | orchestrator | 2026-01-02 00:52:07 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:07.460053 | orchestrator | 2026-01-02 00:52:07 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:10.500146 | orchestrator | 2026-01-02 00:52:10 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:10.502064 | orchestrator | 2026-01-02 00:52:10 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:10.504074 | orchestrator | 2026-01-02 00:52:10 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:10.504188 | orchestrator | 2026-01-02 00:52:10 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:13.566791 | orchestrator | 2026-01-02 00:52:13 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:13.569167 | orchestrator | 2026-01-02 00:52:13 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:13.572168 | orchestrator | 2026-01-02 00:52:13 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:13.572897 | orchestrator | 2026-01-02 00:52:13 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:16.631437 | orchestrator | 2026-01-02 00:52:16 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:16.632126 | orchestrator | 2026-01-02 00:52:16 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:16.633192 | orchestrator | 2026-01-02 00:52:16 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:16.633898 | orchestrator | 2026-01-02 00:52:16 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:19.670617 | orchestrator | 2026-01-02 00:52:19 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:19.670743 | orchestrator | 2026-01-02 00:52:19 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:19.671203 | orchestrator | 2026-01-02 00:52:19 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:19.671232 | orchestrator | 2026-01-02 00:52:19 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:22.706623 | orchestrator | 2026-01-02 00:52:22 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:22.708814 | orchestrator | 2026-01-02 00:52:22 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:22.710927 | orchestrator | 2026-01-02 00:52:22 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:22.711122 | orchestrator | 2026-01-02 00:52:22 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:25.753288 | orchestrator | 2026-01-02 00:52:25 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:25.756002 | orchestrator | 2026-01-02 00:52:25 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:25.756812 | orchestrator | 2026-01-02 00:52:25 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:25.756855 | orchestrator | 2026-01-02 00:52:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:28.794198 | orchestrator | 2026-01-02 00:52:28 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:28.794287 | orchestrator | 2026-01-02 00:52:28 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:28.794787 | orchestrator | 2026-01-02 00:52:28 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:28.794803 | orchestrator | 2026-01-02 00:52:28 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:31.824276 | orchestrator | 2026-01-02 00:52:31 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:31.824351 | orchestrator | 2026-01-02 00:52:31 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:31.825129 | orchestrator | 2026-01-02 00:52:31 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:31.825190 | orchestrator | 2026-01-02 00:52:31 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:34.851992 | orchestrator | 2026-01-02 00:52:34 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:34.852899 | orchestrator | 2026-01-02 00:52:34 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:34.853322 | orchestrator | 2026-01-02 00:52:34 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:34.853474 | orchestrator | 2026-01-02 00:52:34 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:37.877876 | orchestrator | 2026-01-02 00:52:37 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:37.878411 | orchestrator | 2026-01-02 00:52:37 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:37.880629 | orchestrator | 2026-01-02 00:52:37 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:37.880690 | orchestrator | 2026-01-02 00:52:37 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:40.918343 | orchestrator | 2026-01-02 00:52:40 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:40.918437 | orchestrator | 2026-01-02 00:52:40 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:40.919026 | orchestrator | 2026-01-02 00:52:40 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:40.919049 | orchestrator | 2026-01-02 00:52:40 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:43.945123 | orchestrator | 2026-01-02 00:52:43 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:43.945238 | orchestrator | 2026-01-02 00:52:43 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:43.945983 | orchestrator | 2026-01-02 00:52:43 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:43.946106 | orchestrator | 2026-01-02 00:52:43 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:46.981246 | orchestrator | 2026-01-02 00:52:46 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:46.981350 | orchestrator | 2026-01-02 00:52:46 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:46.983045 | orchestrator | 2026-01-02 00:52:46 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:46.983427 | orchestrator | 2026-01-02 00:52:46 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:50.013184 | orchestrator | 2026-01-02 00:52:50 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:50.013278 | orchestrator | 2026-01-02 00:52:50 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:50.014452 | orchestrator | 2026-01-02 00:52:50 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:50.014537 | orchestrator | 2026-01-02 00:52:50 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:53.046327 | orchestrator | 2026-01-02 00:52:53 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:53.046877 | orchestrator | 2026-01-02 00:52:53 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:53.047887 | orchestrator | 2026-01-02 00:52:53 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:53.047979 | orchestrator | 2026-01-02 00:52:53 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:56.072553 | orchestrator | 2026-01-02 00:52:56 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:56.072975 | orchestrator | 2026-01-02 00:52:56 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:56.073565 | orchestrator | 2026-01-02 00:52:56 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state STARTED
2026-01-02 00:52:56.073864 | orchestrator | 2026-01-02 00:52:56 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:52:59.107546 | orchestrator | 2026-01-02 00:52:59 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:52:59.110143 | orchestrator | 2026-01-02 00:52:59 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED
2026-01-02 00:52:59.115026 | orchestrator | 2026-01-02 00:52:59 | INFO  | Task a72f01af-241f-4d96-85d6-3f82360415c7 is in state SUCCESS
2026-01-02 00:52:59.117324 | orchestrator |
2026-01-02 00:52:59.117404 | orchestrator |
2026-01-02 00:52:59.117410 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 00:52:59.117415 | orchestrator |
2026-01-02 00:52:59.117419 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 00:52:59.117424 | orchestrator | Friday 02 January 2026 00:49:39 +0000 (0:00:00.165) 0:00:00.165 ********
2026-01-02 00:52:59.117428 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:52:59.117434 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:52:59.117438 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:52:59.117442 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:52:59.117446 | orchestrator | ok:
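The wait loop above, which checks the three Celery-style task UUIDs every few seconds until one reaches SUCCESS, can be approximated with a small polling helper. This is a hedged sketch, not the actual osism watcher implementation: the function name, the `get_states` callable, and the timeout handling are illustrative assumptions.

```python
import time

def wait_for_tasks(get_states, interval=1.0, timeout=600.0):
    """Poll task states until none is left in a running state.

    get_states: callable returning {task_id: state}. A state such as
    'STARTED' keeps the loop going; 'SUCCESS' or 'FAILURE' is terminal.
    (Hypothetical helper mirroring the log's "is in state ... / Wait 1
    second(s) until the next check" pattern.)
    """
    deadline = time.monotonic() + timeout
    while True:
        states = get_states()
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        if all(s in ("SUCCESS", "FAILURE") for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError("tasks still running after timeout")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

In the log the three tasks stay in STARTED for roughly 50 seconds before the first one flips to SUCCESS, at which point the buffered Ansible play output is flushed to the console.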
[testbed-node-4]
2026-01-02 00:52:59.117450 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:52:59.117454 | orchestrator |
2026-01-02 00:52:59.117458 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 00:52:59.117486 | orchestrator | Friday 02 January 2026 00:49:40 +0000 (0:00:00.612) 0:00:00.778 ********
2026-01-02 00:52:59.117491 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-01-02 00:52:59.117495 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-01-02 00:52:59.117499 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-01-02 00:52:59.117503 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-01-02 00:52:59.117507 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-01-02 00:52:59.117510 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-01-02 00:52:59.117514 | orchestrator |
2026-01-02 00:52:59.117518 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-01-02 00:52:59.117522 | orchestrator |
2026-01-02 00:52:59.117526 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-01-02 00:52:59.117530 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.880) 0:00:01.658 ********
2026-01-02 00:52:59.117540 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:52:59.117545 | orchestrator |
2026-01-02 00:52:59.117549 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-01-02 00:52:59.117553 | orchestrator | Friday 02 January 2026 00:49:42 +0000 (0:00:01.461) 0:00:03.119 ********
2026-01-02 00:52:59.117558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name':
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117688 | orchestrator | 2026-01-02 00:52:59.117699 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-02 00:52:59.117704 | orchestrator | Friday 02 January 2026 00:49:44 +0000 (0:00:01.338) 0:00:04.458 ******** 2026-01-02 00:52:59.117708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117752 | orchestrator | 2026-01-02 00:52:59.117756 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-02 00:52:59.117760 | orchestrator | Friday 02 January 2026 00:49:46 +0000 (0:00:01.760) 0:00:06.219 ******** 
2026-01-02 00:52:59.117764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117840 | orchestrator | 2026-01-02 00:52:59.117844 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-02 00:52:59.117848 | orchestrator | Friday 02 January 2026 00:49:47 +0000 (0:00:01.437) 0:00:07.656 ******** 2026-01-02 00:52:59.117851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117859 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117875 | orchestrator | 2026-01-02 00:52:59.117881 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-01-02 
00:52:59.117885 | orchestrator | Friday 02 January 2026 00:49:49 +0000 (0:00:01.728) 0:00:09.384 ******** 2026-01-02 00:52:59.117889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117925 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.117934 | orchestrator | 2026-01-02 00:52:59.117939 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-01-02 00:52:59.117944 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:01.471) 0:00:10.856 ******** 2026-01-02 00:52:59.117949 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:52:59.117953 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.117958 | orchestrator | } 2026-01-02 00:52:59.117962 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:52:59.117967 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.117971 | orchestrator | } 2026-01-02 00:52:59.117975 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:52:59.117980 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.117984 | orchestrator | } 2026-01-02 00:52:59.117989 | orchestrator | changed: [testbed-node-3] => { 2026-01-02 00:52:59.117993 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.117997 | orchestrator | } 2026-01-02 00:52:59.118001 | orchestrator | changed: 
[testbed-node-4] => { 2026-01-02 00:52:59.118006 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.118010 | orchestrator | } 2026-01-02 00:52:59.118050 | orchestrator | changed: [testbed-node-5] => { 2026-01-02 00:52:59.118055 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.118059 | orchestrator | } 2026-01-02 00:52:59.118064 | orchestrator | 2026-01-02 00:52:59.118068 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:52:59.118073 | orchestrator | Friday 02 January 2026 00:49:51 +0000 (0:00:00.928) 0:00:11.785 ******** 2026-01-02 00:52:59.118078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.118083 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.118100 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.118109 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.118121 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:52:59.118125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.118130 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:52:59.118134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.118139 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:52:59.118143 | orchestrator | 2026-01-02 00:52:59.118148 | orchestrator | TASK [ovn-controller : Create br-int bridge on 
OpenvSwitch] ********************
2026-01-02 00:52:59.118152 | orchestrator | Friday 02 January 2026 00:49:53 +0000 (0:00:01.490) 0:00:13.276 ********
2026-01-02 00:52:59.118157 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:52:59.118161 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:52:59.118165 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:52:59.118170 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:52:59.118175 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:52:59.118179 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:52:59.118183 | orchestrator |
2026-01-02 00:52:59.118187 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-01-02 00:52:59.118192 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:02.822) 0:00:16.099 ********
2026-01-02 00:52:59.118196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-01-02 00:52:59.118201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-01-02 00:52:59.118205 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-01-02 00:52:59.118210 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-01-02 00:52:59.118214 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-01-02 00:52:59.118222 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-01-02 00:52:59.118227 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-02 00:52:59.118231 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-02 00:52:59.118236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-02 00:52:59.118240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-02 00:52:59.118244 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-02 00:52:59.118249 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-02 00:52:59.118256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-02 00:52:59.118262 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-02 00:52:59.118267 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-02 00:52:59.118271 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-02 00:52:59.118276 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-02 00:52:59.118281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-02 00:52:59.118285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-02 00:52:59.118291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-02 00:52:59.118296 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-02 00:52:59.118299 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-02 00:52:59.118305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-02 00:52:59.118309 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-02 00:52:59.118313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-02 00:52:59.118317 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-02 00:52:59.118320 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-02 00:52:59.118324 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-02 00:52:59.118328 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-02 00:52:59.118332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-02 00:52:59.118336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-02 00:52:59.118340 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-02 00:52:59.118343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-02 00:52:59.118347 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-02 00:52:59.118354 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-02 00:52:59.118358 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-02 00:52:59.118362 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-02 00:52:59.118366 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-02 00:52:59.118370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-02 00:52:59.118374 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-02 00:52:59.118377 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-02 00:52:59.118381 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-02 00:52:59.118385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-02 00:52:59.118389 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-02 00:52:59.118393 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-02 00:52:59.118397 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-02 00:52:59.118401 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-02 00:52:59.118407 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-02 00:52:59.118411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-02 00:52:59.118415 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-02 00:52:59.118419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-02 00:52:59.118423 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-02 00:52:59.118426 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-02 00:52:59.118430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-02 00:52:59.118434 | orchestrator |
2026-01-02 00:52:59.118438 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-02 00:52:59.118442 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:22.770) 0:00:38.869 ********
2026-01-02 00:52:59.118446 | orchestrator |
2026-01-02 00:52:59.118449 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-02 00:52:59.118453 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:00.052) 0:00:38.922 ********
2026-01-02 00:52:59.118457 | orchestrator |
2026-01-02 00:52:59.118461 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-02 00:52:59.118467 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:00.053) 0:00:38.976 ********
2026-01-02 00:52:59.118470 | orchestrator |
2026-01-02 00:52:59.118474 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-02 00:52:59.118478 | orchestrator | Friday 02 January 2026 00:50:18 +0000
(0:00:00.071) 0:00:39.048 ******** 2026-01-02 00:52:59.118488 | orchestrator | 2026-01-02 00:52:59.118492 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:52:59.118495 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:00.074) 0:00:39.122 ******** 2026-01-02 00:52:59.118499 | orchestrator | 2026-01-02 00:52:59.118503 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-02 00:52:59.118507 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:00.054) 0:00:39.177 ******** 2026-01-02 00:52:59.118510 | orchestrator | 2026-01-02 00:52:59.118514 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-02 00:52:59.118518 | orchestrator | Friday 02 January 2026 00:50:19 +0000 (0:00:00.083) 0:00:39.261 ******** 2026-01-02 00:52:59.118522 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.118526 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.118530 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:52:59.118533 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.118537 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:52:59.118541 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:52:59.118545 | orchestrator | 2026-01-02 00:52:59.118549 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-02 00:52:59.118553 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:02.146) 0:00:41.408 ******** 2026-01-02 00:52:59.118556 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.118560 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:52:59.118564 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:52:59.118568 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.118571 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.118575 | orchestrator | changed: [testbed-node-3] 2026-01-02 
00:52:59.118579 | orchestrator | 2026-01-02 00:52:59.118583 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-02 00:52:59.118589 | orchestrator | 2026-01-02 00:52:59.118595 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-02 00:52:59.118601 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:08.903) 0:00:50.311 ******** 2026-01-02 00:52:59.118608 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:59.118614 | orchestrator | 2026-01-02 00:52:59.118620 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-02 00:52:59.118630 | orchestrator | Friday 02 January 2026 00:50:30 +0000 (0:00:00.768) 0:00:51.079 ******** 2026-01-02 00:52:59.118638 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:59.118644 | orchestrator | 2026-01-02 00:52:59.118650 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-02 00:52:59.118656 | orchestrator | Friday 02 January 2026 00:50:31 +0000 (0:00:00.846) 0:00:51.926 ******** 2026-01-02 00:52:59.118663 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.118669 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.118675 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.118683 | orchestrator | 2026-01-02 00:52:59.118689 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-02 00:52:59.118696 | orchestrator | Friday 02 January 2026 00:50:32 +0000 (0:00:01.032) 0:00:52.959 ******** 2026-01-02 00:52:59.118701 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.118705 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.118708 | orchestrator | ok: [testbed-node-2] 2026-01-02 
00:52:59.118712 | orchestrator | 2026-01-02 00:52:59.118716 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-02 00:52:59.118720 | orchestrator | Friday 02 January 2026 00:50:33 +0000 (0:00:00.443) 0:00:53.402 ******** 2026-01-02 00:52:59.118723 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.118727 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.118731 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.118735 | orchestrator | 2026-01-02 00:52:59.118738 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-02 00:52:59.118751 | orchestrator | Friday 02 January 2026 00:50:33 +0000 (0:00:00.515) 0:00:53.918 ******** 2026-01-02 00:52:59.118755 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.118759 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.118763 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.118766 | orchestrator | 2026-01-02 00:52:59.118771 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-02 00:52:59.118774 | orchestrator | Friday 02 January 2026 00:50:34 +0000 (0:00:00.339) 0:00:54.257 ******** 2026-01-02 00:52:59.118778 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.118800 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.118804 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.118808 | orchestrator | 2026-01-02 00:52:59.118812 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-02 00:52:59.118816 | orchestrator | Friday 02 January 2026 00:50:34 +0000 (0:00:00.375) 0:00:54.633 ******** 2026-01-02 00:52:59.118820 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118823 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118827 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118831 | orchestrator | 2026-01-02 
00:52:59.118835 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-02 00:52:59.118839 | orchestrator | Friday 02 January 2026 00:50:34 +0000 (0:00:00.279) 0:00:54.913 ******** 2026-01-02 00:52:59.118842 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118846 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118850 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118854 | orchestrator | 2026-01-02 00:52:59.118857 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-02 00:52:59.118861 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:00.382) 0:00:55.295 ******** 2026-01-02 00:52:59.118865 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118869 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118873 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118876 | orchestrator | 2026-01-02 00:52:59.118883 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-02 00:52:59.118887 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:00.242) 0:00:55.538 ******** 2026-01-02 00:52:59.118891 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118895 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118898 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118902 | orchestrator | 2026-01-02 00:52:59.118906 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-02 00:52:59.118910 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:00.234) 0:00:55.772 ******** 2026-01-02 00:52:59.118913 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118917 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118921 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118925 | orchestrator | 2026-01-02 
00:52:59.118928 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-02 00:52:59.118932 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:00.244) 0:00:56.016 ******** 2026-01-02 00:52:59.118936 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118940 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118944 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118948 | orchestrator | 2026-01-02 00:52:59.118951 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-02 00:52:59.118955 | orchestrator | Friday 02 January 2026 00:50:36 +0000 (0:00:00.325) 0:00:56.342 ******** 2026-01-02 00:52:59.118959 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118963 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118966 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118970 | orchestrator | 2026-01-02 00:52:59.118974 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-02 00:52:59.118981 | orchestrator | Friday 02 January 2026 00:50:36 +0000 (0:00:00.233) 0:00:56.575 ******** 2026-01-02 00:52:59.118985 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.118989 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.118993 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.118996 | orchestrator | 2026-01-02 00:52:59.119000 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-02 00:52:59.119004 | orchestrator | Friday 02 January 2026 00:50:36 +0000 (0:00:00.260) 0:00:56.836 ******** 2026-01-02 00:52:59.119008 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119011 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119015 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119019 | orchestrator | 2026-01-02 
00:52:59.119023 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-02 00:52:59.119027 | orchestrator | Friday 02 January 2026 00:50:36 +0000 (0:00:00.257) 0:00:57.094 ******** 2026-01-02 00:52:59.119030 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119034 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119038 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119042 | orchestrator | 2026-01-02 00:52:59.119045 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-02 00:52:59.119049 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:00.283) 0:00:57.377 ******** 2026-01-02 00:52:59.119053 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119057 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119060 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119064 | orchestrator | 2026-01-02 00:52:59.119068 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-02 00:52:59.119072 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:00.418) 0:00:57.796 ******** 2026-01-02 00:52:59.119076 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119080 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119083 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119087 | orchestrator | 2026-01-02 00:52:59.119091 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-02 00:52:59.119095 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:00.225) 0:00:58.021 ******** 2026-01-02 00:52:59.119099 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:52:59.119102 | orchestrator | 2026-01-02 00:52:59.119109 | orchestrator | TASK [ovn-db : Set bootstrap 
args fact for NB (new cluster)] ******************* 2026-01-02 00:52:59.119113 | orchestrator | Friday 02 January 2026 00:50:38 +0000 (0:00:00.523) 0:00:58.545 ******** 2026-01-02 00:52:59.119117 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.119120 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.119124 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.119128 | orchestrator | 2026-01-02 00:52:59.119132 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-02 00:52:59.119136 | orchestrator | Friday 02 January 2026 00:50:38 +0000 (0:00:00.519) 0:00:59.065 ******** 2026-01-02 00:52:59.119139 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.119143 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.119147 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.119151 | orchestrator | 2026-01-02 00:52:59.119155 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-02 00:52:59.119158 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:00.331) 0:00:59.396 ******** 2026-01-02 00:52:59.119162 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119166 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119170 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119174 | orchestrator | 2026-01-02 00:52:59.119178 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-02 00:52:59.119181 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:00.275) 0:00:59.672 ******** 2026-01-02 00:52:59.119185 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119192 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119196 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119200 | orchestrator | 2026-01-02 00:52:59.119204 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the 
new node in NB DB] *** 2026-01-02 00:52:59.119208 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:00.263) 0:00:59.936 ******** 2026-01-02 00:52:59.119212 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119215 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119221 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119225 | orchestrator | 2026-01-02 00:52:59.119229 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-02 00:52:59.119233 | orchestrator | Friday 02 January 2026 00:50:40 +0000 (0:00:00.454) 0:01:00.390 ******** 2026-01-02 00:52:59.119237 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119240 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119244 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119248 | orchestrator | 2026-01-02 00:52:59.119252 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-02 00:52:59.119267 | orchestrator | Friday 02 January 2026 00:50:40 +0000 (0:00:00.311) 0:01:00.702 ******** 2026-01-02 00:52:59.119272 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119276 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119279 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119283 | orchestrator | 2026-01-02 00:52:59.119287 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-02 00:52:59.119291 | orchestrator | Friday 02 January 2026 00:50:40 +0000 (0:00:00.313) 0:01:01.015 ******** 2026-01-02 00:52:59.119295 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.119299 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.119302 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.119306 | orchestrator | 2026-01-02 00:52:59.119310 | orchestrator | TASK [ovn-db : Ensuring config directories exist] 
****************************** 2026-01-02 00:52:59.119314 | orchestrator | Friday 02 January 2026 00:50:41 +0000 (0:00:00.341) 0:01:01.357 ******** 2026-01-02 00:52:59.119320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': 
'1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119388 | orchestrator | 2026-01-02 00:52:59.119392 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-02 00:52:59.119396 | orchestrator | Friday 02 January 2026 00:50:44 +0000 (0:00:03.259) 0:01:04.616 ******** 2026-01-02 00:52:59.119400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119461 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119473 | orchestrator | 2026-01-02 00:52:59.119477 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-02 00:52:59.119481 | orchestrator | Friday 02 January 2026 00:50:49 +0000 (0:00:05.162) 0:01:09.779 ******** 2026-01-02 00:52:59.119485 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-02 00:52:59.119489 | orchestrator | 2026-01-02 00:52:59.119493 | 
orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-02 00:52:59.119497 | orchestrator | Friday 02 January 2026 00:50:50 +0000 (0:00:00.541) 0:01:10.321 ******** 2026-01-02 00:52:59.119501 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.119505 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.119508 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.119512 | orchestrator | 2026-01-02 00:52:59.119516 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-02 00:52:59.119520 | orchestrator | Friday 02 January 2026 00:50:51 +0000 (0:00:00.959) 0:01:11.280 ******** 2026-01-02 00:52:59.119528 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.119531 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.119535 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.119539 | orchestrator | 2026-01-02 00:52:59.119543 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-02 00:52:59.119547 | orchestrator | Friday 02 January 2026 00:50:52 +0000 (0:00:01.914) 0:01:13.195 ******** 2026-01-02 00:52:59.119551 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.119554 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.119558 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.119562 | orchestrator | 2026-01-02 00:52:59.119566 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-02 00:52:59.119569 | orchestrator | Friday 02 January 2026 00:50:54 +0000 (0:00:01.885) 0:01:15.080 ******** 2026-01-02 00:52:59.119578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-02 00:52:59.119607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119670 | orchestrator | 2026-01-02 00:52:59.119674 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-02 00:52:59.119678 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:03.885) 0:01:18.965 ******** 2026-01-02 00:52:59.119682 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:52:59.119686 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.119690 | orchestrator | } 2026-01-02 00:52:59.119694 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:52:59.119697 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.119701 | orchestrator | } 2026-01-02 00:52:59.119705 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:52:59.119709 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.119716 | orchestrator | } 2026-01-02 00:52:59.119719 | orchestrator | 2026-01-02 00:52:59.119723 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:52:59.119727 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.296) 0:01:19.262 ******** 2026-01-02 00:52:59.119731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-02 00:52:59.119735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119777 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.119781 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.119811 | orchestrator | 2026-01-02 00:52:59.119815 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-02 00:52:59.119819 | orchestrator | Friday 02 January 2026 00:51:01 +0000 (0:00:02.164) 0:01:21.426 ******** 2026-01-02 00:52:59.119823 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-02 00:52:59.119827 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-02 00:52:59.119831 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-02 00:52:59.119835 | orchestrator | 2026-01-02 00:52:59.119839 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-02 00:52:59.119843 | orchestrator | Friday 02 January 2026 00:51:02 +0000 (0:00:00.982) 0:01:22.409 ******** 2026-01-02 00:52:59.119846 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:52:59.119850 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.119854 | orchestrator | } 2026-01-02 00:52:59.119858 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:52:59.119861 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.119865 | orchestrator | } 2026-01-02 00:52:59.119869 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:52:59.119873 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.119879 | orchestrator | } 2026-01-02 00:52:59.119883 | orchestrator | 2026-01-02 00:52:59.119887 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:52:59.119891 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.825) 0:01:23.235 ******** 2026-01-02 00:52:59.119895 | orchestrator | 2026-01-02 00:52:59.119899 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:52:59.119902 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.088) 0:01:23.323 ******** 2026-01-02 00:52:59.119906 | orchestrator | 2026-01-02 00:52:59.119910 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:52:59.119914 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.064) 0:01:23.388 ******** 2026-01-02 00:52:59.119917 | orchestrator | 2026-01-02 00:52:59.119921 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-02 00:52:59.119925 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.062) 0:01:23.450 ******** 2026-01-02 00:52:59.119929 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.119936 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.119940 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.119944 | orchestrator | 2026-01-02 00:52:59.119948 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] 
************************* 2026-01-02 00:52:59.119951 | orchestrator | Friday 02 January 2026 00:51:16 +0000 (0:00:12.942) 0:01:36.393 ******** 2026-01-02 00:52:59.119955 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.119959 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.119963 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.119967 | orchestrator | 2026-01-02 00:52:59.119970 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-02 00:52:59.119974 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:08.987) 0:01:45.380 ******** 2026-01-02 00:52:59.119978 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-02 00:52:59.119984 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-02 00:52:59.119988 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-02 00:52:59.119992 | orchestrator | 2026-01-02 00:52:59.119995 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-02 00:52:59.119999 | orchestrator | Friday 02 January 2026 00:51:32 +0000 (0:00:07.785) 0:01:53.166 ******** 2026-01-02 00:52:59.120003 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.120007 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.120010 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.120014 | orchestrator | 2026-01-02 00:52:59.120018 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-02 00:52:59.120022 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:13.413) 0:02:06.579 ******** 2026-01-02 00:52:59.120025 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.120029 | orchestrator | 2026-01-02 00:52:59.120033 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-02 00:52:59.120037 | orchestrator | Friday 02 January 2026 00:51:46 +0000 
(0:00:00.133) 0:02:06.713 ******** 2026-01-02 00:52:59.120041 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120044 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120048 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120052 | orchestrator | 2026-01-02 00:52:59.120056 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-02 00:52:59.120059 | orchestrator | Friday 02 January 2026 00:51:47 +0000 (0:00:00.778) 0:02:07.492 ******** 2026-01-02 00:52:59.120063 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.120067 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.120071 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.120074 | orchestrator | 2026-01-02 00:52:59.120078 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-02 00:52:59.120082 | orchestrator | Friday 02 January 2026 00:51:47 +0000 (0:00:00.646) 0:02:08.138 ******** 2026-01-02 00:52:59.120086 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120090 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120093 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120097 | orchestrator | 2026-01-02 00:52:59.120101 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-02 00:52:59.120105 | orchestrator | Friday 02 January 2026 00:51:48 +0000 (0:00:00.891) 0:02:09.029 ******** 2026-01-02 00:52:59.120108 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.120112 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.120116 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.120119 | orchestrator | 2026-01-02 00:52:59.120123 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-02 00:52:59.120127 | orchestrator | Friday 02 January 2026 00:51:49 +0000 (0:00:00.608) 0:02:09.637 ******** 
2026-01-02 00:52:59.120131 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120134 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120138 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120142 | orchestrator | 2026-01-02 00:52:59.120149 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-02 00:52:59.120152 | orchestrator | Friday 02 January 2026 00:51:50 +0000 (0:00:00.744) 0:02:10.382 ******** 2026-01-02 00:52:59.120156 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120160 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120164 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120167 | orchestrator | 2026-01-02 00:52:59.120171 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-02 00:52:59.120175 | orchestrator | Friday 02 January 2026 00:51:50 +0000 (0:00:00.742) 0:02:11.124 ******** 2026-01-02 00:52:59.120179 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-02 00:52:59.120182 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-02 00:52:59.120186 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-02 00:52:59.120190 | orchestrator | 2026-01-02 00:52:59.120194 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-02 00:52:59.120198 | orchestrator | Friday 02 January 2026 00:51:51 +0000 (0:00:00.870) 0:02:11.994 ******** 2026-01-02 00:52:59.120202 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120205 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120209 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120213 | orchestrator | 2026-01-02 00:52:59.120217 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-02 00:52:59.120223 | orchestrator | Friday 02 January 2026 00:51:52 +0000 (0:00:00.333) 0:02:12.328 ******** 2026-01-02 00:52:59.120227 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120231 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120237 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120252 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120256 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120263 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120267 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120277 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120281 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120285 | orchestrator | 2026-01-02 00:52:59.120289 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-02 00:52:59.120293 | orchestrator | Friday 02 January 2026 00:51:55 +0000 (0:00:03.735) 0:02:16.064 ******** 2026-01-02 00:52:59.120300 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120304 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120308 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120320 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 
'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120359 | orchestrator | 2026-01-02 00:52:59.120363 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-02 00:52:59.120367 | orchestrator | Friday 02 January 2026 00:52:01 +0000 (0:00:05.390) 0:02:21.454 ******** 2026-01-02 00:52:59.120371 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-02 00:52:59.120375 | orchestrator | 2026-01-02 00:52:59.120379 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-02 00:52:59.120382 | orchestrator | Friday 02 January 2026 00:52:01 +0000 (0:00:00.609) 0:02:22.064 ******** 2026-01-02 00:52:59.120386 | 
orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120390 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120393 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120397 | orchestrator | 2026-01-02 00:52:59.120401 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-02 00:52:59.120405 | orchestrator | Friday 02 January 2026 00:52:02 +0000 (0:00:00.634) 0:02:22.698 ******** 2026-01-02 00:52:59.120408 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120412 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120416 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120419 | orchestrator | 2026-01-02 00:52:59.120423 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-02 00:52:59.120427 | orchestrator | Friday 02 January 2026 00:52:03 +0000 (0:00:01.381) 0:02:24.079 ******** 2026-01-02 00:52:59.120431 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120435 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120438 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120442 | orchestrator | 2026-01-02 00:52:59.120446 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-02 00:52:59.120452 | orchestrator | Friday 02 January 2026 00:52:05 +0000 (0:00:01.841) 0:02:25.921 ******** 2026-01-02 00:52:59.120459 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-02 00:52:59.120463 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120467 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120471 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120485 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': 
True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120515 | orchestrator | 2026-01-02 00:52:59.120519 | orchestrator | TASK 
[service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-02 00:52:59.120523 | orchestrator | Friday 02 January 2026 00:52:10 +0000 (0:00:05.003) 0:02:30.924 ******** 2026-01-02 00:52:59.120527 | orchestrator | ok: [testbed-node-0] => { 2026-01-02 00:52:59.120531 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.120535 | orchestrator | } 2026-01-02 00:52:59.120538 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:52:59.120542 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.120546 | orchestrator | } 2026-01-02 00:52:59.120550 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:52:59.120553 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.120557 | orchestrator | } 2026-01-02 00:52:59.120561 | orchestrator | 2026-01-02 00:52:59.120565 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:52:59.120568 | orchestrator | Friday 02 January 2026 00:52:11 +0000 (0:00:00.349) 0:02:31.274 ******** 2026-01-02 00:52:59.120576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:52:59.120658 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 00:52:59.120665 | orchestrator | 2026-01-02 00:52:59.120672 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-02 00:52:59.120679 | orchestrator | Friday 02 January 2026 00:52:13 +0000 (0:00:02.221) 0:02:33.496 ******** 2026-01-02 00:52:59.120685 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-02 00:52:59.120689 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-02 00:52:59.120692 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-02 00:52:59.120696 | orchestrator | 2026-01-02 00:52:59.120702 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-02 00:52:59.120706 | orchestrator | Friday 02 January 2026 00:52:14 +0000 (0:00:01.176) 0:02:34.673 ******** 2026-01-02 00:52:59.120710 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:52:59.120715 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.120721 | orchestrator | } 2026-01-02 00:52:59.120730 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:52:59.120737 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.120743 | orchestrator | } 2026-01-02 00:52:59.120749 | orchestrator | changed: 
[testbed-node-2] => { 2026-01-02 00:52:59.120755 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:52:59.120761 | orchestrator | } 2026-01-02 00:52:59.120766 | orchestrator | 2026-01-02 00:52:59.120773 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:52:59.120779 | orchestrator | Friday 02 January 2026 00:52:14 +0000 (0:00:00.519) 0:02:35.193 ******** 2026-01-02 00:52:59.120801 | orchestrator | 2026-01-02 00:52:59.120808 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:52:59.120814 | orchestrator | Friday 02 January 2026 00:52:15 +0000 (0:00:00.063) 0:02:35.256 ******** 2026-01-02 00:52:59.120821 | orchestrator | 2026-01-02 00:52:59.120827 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-02 00:52:59.120834 | orchestrator | Friday 02 January 2026 00:52:15 +0000 (0:00:00.061) 0:02:35.317 ******** 2026-01-02 00:52:59.120840 | orchestrator | 2026-01-02 00:52:59.120846 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-02 00:52:59.120854 | orchestrator | Friday 02 January 2026 00:52:15 +0000 (0:00:00.058) 0:02:35.375 ******** 2026-01-02 00:52:59.120858 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.120862 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.120866 | orchestrator | 2026-01-02 00:52:59.120870 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-02 00:52:59.120873 | orchestrator | Friday 02 January 2026 00:52:27 +0000 (0:00:11.957) 0:02:47.333 ******** 2026-01-02 00:52:59.120877 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:52:59.120881 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:52:59.120885 | orchestrator | 2026-01-02 00:52:59.120888 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] 
******************* 2026-01-02 00:52:59.120892 | orchestrator | Friday 02 January 2026 00:52:40 +0000 (0:00:12.994) 0:03:00.327 ******** 2026-01-02 00:52:59.120896 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-02 00:52:59.120900 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-02 00:52:59.120909 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-02 00:52:59.120912 | orchestrator | 2026-01-02 00:52:59.120916 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-02 00:52:59.120920 | orchestrator | Friday 02 January 2026 00:52:52 +0000 (0:00:12.387) 0:03:12.714 ******** 2026-01-02 00:52:59.120924 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:52:59.120928 | orchestrator | 2026-01-02 00:52:59.120931 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-02 00:52:59.120935 | orchestrator | Friday 02 January 2026 00:52:52 +0000 (0:00:00.112) 0:03:12.827 ******** 2026-01-02 00:52:59.120939 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120943 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120947 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120950 | orchestrator | 2026-01-02 00:52:59.120954 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-02 00:52:59.120958 | orchestrator | Friday 02 January 2026 00:52:53 +0000 (0:00:00.786) 0:03:13.614 ******** 2026-01-02 00:52:59.120962 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.120965 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.120969 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.120973 | orchestrator | 2026-01-02 00:52:59.120977 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-02 00:52:59.120981 | orchestrator | Friday 02 January 2026 00:52:54 +0000 (0:00:00.752) 0:03:14.366 
******** 2026-01-02 00:52:59.120984 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.120988 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.120992 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.120996 | orchestrator | 2026-01-02 00:52:59.120999 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-02 00:52:59.121007 | orchestrator | Friday 02 January 2026 00:52:54 +0000 (0:00:00.773) 0:03:15.140 ******** 2026-01-02 00:52:59.121011 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:52:59.121015 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:52:59.121019 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:52:59.121022 | orchestrator | 2026-01-02 00:52:59.121026 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-02 00:52:59.121030 | orchestrator | Friday 02 January 2026 00:52:55 +0000 (0:00:00.667) 0:03:15.808 ******** 2026-01-02 00:52:59.121034 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.121038 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.121041 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.121045 | orchestrator | 2026-01-02 00:52:59.121049 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-02 00:52:59.121053 | orchestrator | Friday 02 January 2026 00:52:56 +0000 (0:00:00.680) 0:03:16.488 ******** 2026-01-02 00:52:59.121056 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:52:59.121060 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:52:59.121064 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:52:59.121068 | orchestrator | 2026-01-02 00:52:59.121072 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-02 00:52:59.121076 | orchestrator | Friday 02 January 2026 00:52:57 +0000 (0:00:00.802) 0:03:17.291 ******** 2026-01-02 00:52:59.121079 | orchestrator | ok: 
[testbed-node-0] => (item=1) 2026-01-02 00:52:59.121083 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-02 00:52:59.121087 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-02 00:52:59.121090 | orchestrator | 2026-01-02 00:52:59.121094 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:52:59.121098 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-02 00:52:59.121106 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-02 00:52:59.121110 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-02 00:52:59.121118 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:52:59.121122 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:52:59.121125 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 00:52:59.121129 | orchestrator | 2026-01-02 00:52:59.121133 | orchestrator | 2026-01-02 00:52:59.121137 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:52:59.121140 | orchestrator | Friday 02 January 2026 00:52:58 +0000 (0:00:01.054) 0:03:18.346 ******** 2026-01-02 00:52:59.121144 | orchestrator | =============================================================================== 2026-01-02 00:52:59.121148 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 24.90s 2026-01-02 00:52:59.121152 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.77s 2026-01-02 00:52:59.121156 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 21.98s 2026-01-02 
00:52:59.121159 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 20.17s 2026-01-02 00:52:59.121163 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.41s 2026-01-02 00:52:59.121167 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.90s 2026-01-02 00:52:59.121171 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.39s 2026-01-02 00:52:59.121174 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.16s 2026-01-02 00:52:59.121178 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.00s 2026-01-02 00:52:59.121182 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.89s 2026-01-02 00:52:59.121186 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.74s 2026-01-02 00:52:59.121189 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.26s 2026-01-02 00:52:59.121193 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.82s 2026-01-02 00:52:59.121197 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.22s 2026-01-02 00:52:59.121201 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.16s 2026-01-02 00:52:59.121204 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.15s 2026-01-02 00:52:59.121208 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.91s 2026-01-02 00:52:59.121213 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.89s 2026-01-02 00:52:59.121220 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.84s 2026-01-02 00:52:59.121225 
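The loops above show the ovn-sb-db-relay item being skipped by the plain per-service tasks and instead routed through an included `iterated.yml` task file. A minimal sketch of that partitioning pattern, assuming a kolla-style service map; the names, keys, and `partition_services` helper are illustrative and not kolla-ansible's actual code:

```python
# Hypothetical model of the service map iterated in the tasks above.
# Entries carrying an 'iterate' flag (here the relay) are skipped by the
# plain loops and handled separately, mirroring the "skipping:" lines
# followed by "included: .../iterated.yml" in the log.
services = {
    "ovn-northd": {"container_name": "ovn_northd", "enabled": True},
    "ovn-nb-db": {"container_name": "ovn_nb_db", "enabled": True},
    "ovn-sb-db": {"container_name": "ovn_sb_db", "enabled": True},
    "ovn-sb-db-relay": {"container_name": "ovn_sb_db_relay", "enabled": True,
                        "iterate": True, "iterate_var": "1"},
}

def partition_services(services):
    """Split enabled services into plain and iterated groups."""
    plain, iterated = {}, {}
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue  # disabled services are skipped entirely
        (iterated if svc.get("iterate") else plain)[name] = svc
    return plain, iterated

plain, iterated = partition_services(services)
```

With the map above, the three database/northd services land in `plain` while the relay lands in `iterated`, matching the skip/include pattern in the log.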
| orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.76s 2026-01-02 00:52:59.121231 | orchestrator | 2026-01-02 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:53:02.156252 | orchestrator | 2026-01-02 00:53:02 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:53:02.156703 | orchestrator | 2026-01-02 00:53:02 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02 00:53:02.156819 | orchestrator | 2026-01-02 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:48.882966 | orchestrator | 2026-01-02 00:54:48 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:54:48.884609 | orchestrator | 2026-01-02 00:54:48 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state STARTED 2026-01-02
00:54:48.884692 | orchestrator | 2026-01-02 00:54:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:51.926429 | orchestrator | 2026-01-02 00:54:51 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:54:51.927249 | orchestrator | 2026-01-02 00:54:51 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:54:51.932514 | orchestrator | 2026-01-02 00:54:51 | INFO  | Task ae7cb1e4-503c-476f-9517-35d3597afcab is in state SUCCESS 2026-01-02 00:54:51.934186 | orchestrator | 2026-01-02 00:54:51.934229 | orchestrator | 2026-01-02 00:54:51.934243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:54:51.934256 | orchestrator | 2026-01-02 00:54:51.934267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:54:51.934279 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-01-02 00:54:51.934291 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.934303 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.934314 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.934325 | orchestrator | 2026-01-02 00:54:51.934337 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:54:51.934349 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.473) 0:00:00.788 ******** 2026-01-02 00:54:51.934361 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-02 00:54:51.934372 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-02 00:54:51.934383 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-02 00:54:51.934394 | orchestrator | 2026-01-02 00:54:51.934405 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-02 00:54:51.934416 | orchestrator 
| 2026-01-02 00:54:51.934427 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-02 00:54:51.934597 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.710) 0:00:01.498 ******** 2026-01-02 00:54:51.934613 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.934625 | orchestrator | 2026-01-02 00:54:51.934636 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-02 00:54:51.934647 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.749) 0:00:02.248 ******** 2026-01-02 00:54:51.934685 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.934748 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.934762 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.934773 | orchestrator | 2026-01-02 00:54:51.934788 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-02 00:54:51.934801 | orchestrator | Friday 02 January 2026 00:48:30 +0000 (0:00:00.802) 0:00:03.051 ******** 2026-01-02 00:54:51.934814 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.934826 | orchestrator | 2026-01-02 00:54:51.934839 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-02 00:54:51.934852 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:01.415) 0:00:04.466 ******** 2026-01-02 00:54:51.934864 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.934876 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.934889 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.934900 | orchestrator | 2026-01-02 00:54:51.934912 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-02 00:54:51.934925 | orchestrator | Friday 02 January 2026 00:48:32 
+0000 (0:00:00.803) 0:00:05.269 ******** 2026-01-02 00:54:51.934937 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:54:51.934950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:54:51.934962 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:54:51.934975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:54:51.934988 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:54:51.935000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-02 00:54:51.935012 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-02 00:54:51.935026 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-02 00:54:51.935038 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-02 00:54:51.935051 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-02 00:54:51.935064 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-02 00:54:51.935077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-02 00:54:51.935089 | orchestrator | 2026-01-02 00:54:51.935101 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-02 00:54:51.935115 | orchestrator | Friday 02 January 2026 00:48:36 +0000 (0:00:03.512) 0:00:08.782 ******** 2026-01-02 00:54:51.935128 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-02 
00:54:51.935141 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-02 00:54:51.935233 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-02 00:54:51.935245 | orchestrator | 2026-01-02 00:54:51.935256 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-02 00:54:51.935268 | orchestrator | Friday 02 January 2026 00:48:37 +0000 (0:00:00.933) 0:00:09.715 ******** 2026-01-02 00:54:51.935279 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-02 00:54:51.935300 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-02 00:54:51.935311 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-02 00:54:51.935322 | orchestrator | 2026-01-02 00:54:51.935332 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-02 00:54:51.935343 | orchestrator | Friday 02 January 2026 00:48:38 +0000 (0:00:01.473) 0:00:11.189 ******** 2026-01-02 00:54:51.935354 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-02 00:54:51.935366 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.935391 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-02 00:54:51.935404 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.935422 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-02 00:54:51.935495 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.935516 | orchestrator | 2026-01-02 00:54:51.935533 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-02 00:54:51.935554 | orchestrator | Friday 02 January 2026 00:48:39 +0000 (0:00:00.743) 0:00:11.932 ******** 2026-01-02 00:54:51.935578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.935609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.935708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.935726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.935738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.935815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.935830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.935982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.936001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.936013 | orchestrator | 2026-01-02 00:54:51.936024 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-02 00:54:51.936039 | orchestrator | Friday 02 January 2026 00:48:41 +0000 (0:00:02.303) 0:00:14.236 ******** 2026-01-02 00:54:51.936058 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.936070 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.936081 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.936092 | orchestrator | 2026-01-02 00:54:51.936103 | orchestrator | TASK [loadbalancer : Ensuring 
proxysql service config subdirectories exist] **** 2026-01-02 00:54:51.936114 | orchestrator | Friday 02 January 2026 00:48:43 +0000 (0:00:01.768) 0:00:16.005 ******** 2026-01-02 00:54:51.936125 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-02 00:54:51.936135 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-02 00:54:51.936146 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-02 00:54:51.936157 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-02 00:54:51.936167 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-02 00:54:51.936178 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-02 00:54:51.936189 | orchestrator | 2026-01-02 00:54:51.936200 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-02 00:54:51.936211 | orchestrator | Friday 02 January 2026 00:48:45 +0000 (0:00:02.143) 0:00:18.149 ******** 2026-01-02 00:54:51.936250 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.936270 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.936281 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.936292 | orchestrator | 2026-01-02 00:54:51.936303 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-02 00:54:51.936314 | orchestrator | Friday 02 January 2026 00:48:47 +0000 (0:00:01.461) 0:00:19.610 ******** 2026-01-02 00:54:51.936350 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.936361 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.936372 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.936383 | orchestrator | 2026-01-02 00:54:51.936394 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-02 00:54:51.936405 | orchestrator | Friday 02 January 2026 00:48:49 +0000 (0:00:01.891) 0:00:21.502 ******** 2026-01-02 00:54:51.936416 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.936437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.936450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.936469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 
'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:54:51.936481 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.936492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.936512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 
00:54:51.936579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.936591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.936613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.936625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.936642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:54:51.936653 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.936724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:54:51.936760 | orchestrator | skipping: 
[testbed-node-1] 2026-01-02 00:54:51.936770 | orchestrator | 2026-01-02 00:54:51.936779 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-02 00:54:51.936789 | orchestrator | Friday 02 January 2026 00:48:50 +0000 (0:00:01.053) 0:00:22.556 ******** 2026-01-02 00:54:51.936799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.936810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.936828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.936839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.936853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.936864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:54:51.936881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.936891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.936901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60', 
'__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:54:51.936917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.936928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.936944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60', 
'__omit_place_holder__e9fc08771d87a568a18ea0cbd34077ffe9694e60'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-02 00:54:51.936966 | orchestrator | 2026-01-02 00:54:51.936976 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-02 00:54:51.936986 | orchestrator | Friday 02 January 2026 00:48:54 +0000 (0:00:03.862) 0:00:26.418 ******** 2026-01-02 00:54:51.936996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.937006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.937016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.937077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.937094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.937117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.937145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.937162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.937180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.937197 | orchestrator | 2026-01-02 00:54:51.937214 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-02 00:54:51.937225 | orchestrator | Friday 02 January 2026 00:48:57 +0000 (0:00:03.247) 0:00:29.666 ******** 2026-01-02 00:54:51.937235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-02 00:54:51.937245 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-02 00:54:51.937255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-02 00:54:51.937264 | orchestrator | 2026-01-02 00:54:51.937274 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-02 00:54:51.937284 | orchestrator | Friday 02 January 2026 00:49:00 +0000 (0:00:02.740) 0:00:32.407 ******** 2026-01-02 00:54:51.937293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-02 00:54:51.937302 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-02 00:54:51.937312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-02 00:54:51.937322 | orchestrator | 2026-01-02 00:54:51.938403 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-02 00:54:51.938495 | orchestrator | Friday 02 January 2026 00:49:05 +0000 (0:00:05.305) 0:00:37.712 ******** 2026-01-02 00:54:51.938512 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.938529 
| orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.938547 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.938565 | orchestrator | 2026-01-02 00:54:51.938583 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-02 00:54:51.938638 | orchestrator | Friday 02 January 2026 00:49:06 +0000 (0:00:00.790) 0:00:38.503 ******** 2026-01-02 00:54:51.938712 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-02 00:54:51.938736 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-02 00:54:51.938755 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-02 00:54:51.938775 | orchestrator | 2026-01-02 00:54:51.938794 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-02 00:54:51.938813 | orchestrator | Friday 02 January 2026 00:49:08 +0000 (0:00:02.751) 0:00:41.255 ******** 2026-01-02 00:54:51.938825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-02 00:54:51.938837 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-02 00:54:51.938861 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-02 00:54:51.938873 | orchestrator | 2026-01-02 00:54:51.938883 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-02 00:54:51.938894 | orchestrator | Friday 02 January 2026 00:49:11 +0000 (0:00:02.338) 0:00:43.593 ******** 2026-01-02 00:54:51.938919 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.938931 | orchestrator | 2026-01-02 00:54:51.938942 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-02 00:54:51.938953 | orchestrator | Friday 02 January 2026 00:49:11 +0000 (0:00:00.495) 0:00:44.089 ******** 2026-01-02 00:54:51.938964 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-02 00:54:51.938975 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-02 00:54:51.938987 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-02 00:54:51.939158 | orchestrator | 2026-01-02 00:54:51.939171 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-02 00:54:51.939182 | orchestrator | Friday 02 January 2026 00:49:14 +0000 (0:00:02.541) 0:00:46.631 ******** 2026-01-02 00:54:51.939193 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-02 00:54:51.939204 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-02 00:54:51.939215 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-02 00:54:51.939226 | orchestrator | 2026-01-02 00:54:51.939236 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-02 00:54:51.939248 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:02.123) 0:00:48.755 ******** 2026-01-02 00:54:51.939258 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.939270 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.939280 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.939291 | orchestrator | 2026-01-02 00:54:51.939302 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-02 00:54:51.939313 | orchestrator | Friday 02 January 2026 00:49:16 
+0000 (0:00:00.252) 0:00:49.007 ******** 2026-01-02 00:54:51.939324 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.939334 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.939345 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.939356 | orchestrator | 2026-01-02 00:54:51.939367 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-02 00:54:51.939378 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.248) 0:00:49.255 ******** 2026-01-02 00:54:51.939392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.939454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.939468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.939522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.939538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.939550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.939562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.939583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.939608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.939630 | orchestrator | 2026-01-02 00:54:51.939687 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-02 00:54:51.939811 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:03.619) 0:00:52.875 ******** 2026-01-02 00:54:51.939824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.939843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.939940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.939952 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.939964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.939988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.940000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.940012 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.940032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.940044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.940073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.940085 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.940096 | orchestrator | 2026-01-02 00:54:51.940108 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-02 00:54:51.940156 | orchestrator | Friday 02 January 2026 00:49:21 +0000 (0:00:00.870) 0:00:53.745 ******** 2026-01-02 00:54:51.940170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.940190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-01-02 00:54:51.940201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.940212 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.940231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.940244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.940260 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.940272 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.940283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.940294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.940312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.940324 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.940335 | orchestrator | 2026-01-02 00:54:51.940346 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-02 00:54:51.940357 | orchestrator | Friday 02 January 2026 00:49:22 +0000 (0:00:01.077) 0:00:54.823 ******** 2026-01-02 00:54:51.940368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-02 00:54:51.940379 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-02 00:54:51.940390 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-02 00:54:51.940401 | orchestrator | 2026-01-02 00:54:51.940412 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-02 00:54:51.940423 | orchestrator | Friday 02 January 2026 00:49:24 +0000 (0:00:01.461) 0:00:56.285 ******** 2026-01-02 00:54:51.940433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-02 00:54:51.940450 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-02 00:54:51.940462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-02 00:54:51.940472 | 
orchestrator | 2026-01-02 00:54:51.940483 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-02 00:54:51.940494 | orchestrator | Friday 02 January 2026 00:49:25 +0000 (0:00:01.729) 0:00:58.015 ******** 2026-01-02 00:54:51.940505 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-02 00:54:51.940516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-02 00:54:51.940527 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.940538 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-02 00:54:51.940548 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-02 00:54:51.940559 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-02 00:54:51.940570 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.940581 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-02 00:54:51.940591 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.940602 | orchestrator | 2026-01-02 00:54:51.940613 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-02 00:54:51.940624 | orchestrator | Friday 02 January 2026 00:49:26 +0000 (0:00:01.238) 0:00:59.253 ******** 2026-01-02 00:54:51.940641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.940683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.940696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.940708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.940727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.940740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.940757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.940780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.940792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.940803 | orchestrator | 2026-01-02 00:54:51.940815 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-02 00:54:51.940826 | orchestrator | Friday 02 January 2026 00:49:30 +0000 (0:00:03.207) 0:01:02.461 ******** 2026-01-02 00:54:51.940837 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:54:51.940847 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:54:51.940859 | orchestrator | } 2026-01-02 00:54:51.940869 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:54:51.940880 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:54:51.940891 | orchestrator | } 2026-01-02 00:54:51.940902 | orchestrator | changed: 
[testbed-node-2] => { 2026-01-02 00:54:51.940912 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:54:51.940923 | orchestrator | } 2026-01-02 00:54:51.940934 | orchestrator | 2026-01-02 00:54:51.940945 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:54:51.940955 | orchestrator | Friday 02 January 2026 00:49:30 +0000 (0:00:00.298) 0:01:02.760 ******** 2026-01-02 00:54:51.940967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.940988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.941000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.941018 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.941035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.941048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.941059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.941071 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.941082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.941094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.941113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.941125 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.941145 | orchestrator | 2026-01-02 00:54:51.941157 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-02 00:54:51.941167 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:01.154) 0:01:03.915 ******** 2026-01-02 00:54:51.941178 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.941189 | orchestrator | 2026-01-02 00:54:51.941200 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-02 00:54:51.941211 | orchestrator | Friday 02 January 2026 00:49:32 +0000 (0:00:00.558) 0:01:04.473 ******** 2026-01-02 00:54:51.941230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-02 00:54:51.941245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:54:51.941258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.941307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:54:51.941323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.941358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:54:51.941376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941406 | orchestrator | 2026-01-02 00:54:51.941418 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-02 00:54:51.941429 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:03.293) 0:01:07.766 ******** 2026-01-02 00:54:51.941445 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.941457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:54:51.941469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941491 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.941515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.941527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:54:51.941544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941567 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.941579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.941590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-02 00:54:51.941616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.941639 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.941650 | orchestrator | 2026-01-02 00:54:51.941714 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-02 00:54:51.941726 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:00.798) 0:01:08.565 ******** 2026-01-02 00:54:51.941744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.941757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.941769 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.941780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.941792 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.941803 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.941813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.941824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.941836 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.941847 | orchestrator | 2026-01-02 00:54:51.941858 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-02 00:54:51.941869 | orchestrator | Friday 02 January 2026 00:49:37 +0000 (0:00:01.017) 0:01:09.582 ******** 2026-01-02 00:54:51.941888 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.941899 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.941909 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.941920 | orchestrator | 2026-01-02 00:54:51.941931 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-02 00:54:51.941942 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:01.303) 0:01:10.886 ******** 2026-01-02 00:54:51.941953 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.941963 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.941974 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.941984 | orchestrator | 2026-01-02 00:54:51.941995 | 
orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-02 00:54:51.942006 | orchestrator | Friday 02 January 2026 00:49:40 +0000 (0:00:01.752) 0:01:12.638 ******** 2026-01-02 00:54:51.942057 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.942075 | orchestrator | 2026-01-02 00:54:51.942094 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-02 00:54:51.942114 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.732) 0:01:13.370 ******** 2026-01-02 00:54:51.942145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.942168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.942189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.942357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942387 | orchestrator | 2026-01-02 00:54:51.942397 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external 
frontend] *** 2026-01-02 00:54:51.942408 | orchestrator | Friday 02 January 2026 00:49:44 +0000 (0:00:03.626) 0:01:16.997 ******** 2026-01-02 00:54:51.942418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.942435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.942462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942473 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.942483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942510 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.942526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.942537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.942563 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.942572 | orchestrator | 2026-01-02 00:54:51.942582 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-02 00:54:51.942592 | orchestrator | Friday 02 January 2026 00:49:45 +0000 (0:00:01.032) 0:01:18.030 ******** 2026-01-02 00:54:51.942603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.942619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.942631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.942641 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.942651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.942689 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.942700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.942710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.942720 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.942730 | orchestrator | 2026-01-02 00:54:51.942740 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL 
users config] *********** 2026-01-02 00:54:51.942750 | orchestrator | Friday 02 January 2026 00:49:47 +0000 (0:00:01.303) 0:01:19.334 ******** 2026-01-02 00:54:51.942760 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.942769 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.942781 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.942798 | orchestrator | 2026-01-02 00:54:51.942810 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-02 00:54:51.942820 | orchestrator | Friday 02 January 2026 00:49:48 +0000 (0:00:01.439) 0:01:20.773 ******** 2026-01-02 00:54:51.942829 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.942839 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.942848 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.942858 | orchestrator | 2026-01-02 00:54:51.942867 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-02 00:54:51.942877 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:02.330) 0:01:23.103 ******** 2026-01-02 00:54:51.942887 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.942897 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.942906 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.942916 | orchestrator | 2026-01-02 00:54:51.942943 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-02 00:54:51.942953 | orchestrator | Friday 02 January 2026 00:49:51 +0000 (0:00:00.390) 0:01:23.494 ******** 2026-01-02 00:54:51.942963 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.942972 | orchestrator | 2026-01-02 00:54:51.942982 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-02 00:54:51.942992 | orchestrator | Friday 02 January 2026 00:49:52 +0000 
(0:00:00.976) 0:01:24.470 ******** 2026-01-02 00:54:51.943002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-02 00:54:51.943038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-02 00:54:51.943049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-02 00:54:51.943059 | orchestrator |
2026-01-02 00:54:51.943069 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-02 00:54:51.943281 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:02.937) 0:01:27.407 ********
2026-01-02 00:54:51.943306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-02 00:54:51.943324 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.943354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-02 00:54:51.943432 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.943460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-01-02 00:54:51.943479 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.943496 | orchestrator |
2026-01-02 00:54:51.943512 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-01-02 00:54:51.943530 | orchestrator | Friday 02 January 2026 00:49:58 +0000 (0:00:03.429) 0:01:30.836 ********
2026-01-02 00:54:51.943541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-02 00:54:51.943552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-02 00:54:51.943563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-02 00:54:51.943574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-02 00:54:51.943585 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.943594 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.943605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-02 00:54:51.943624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-01-02 00:54:51.943642 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.943652 | orchestrator |
2026-01-02 00:54:51.943720 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-01-02 00:54:51.943731 | orchestrator | Friday 02 January 2026 00:50:00 +0000 (0:00:02.171) 0:01:33.008 ********
2026-01-02 00:54:51.943741 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.943750 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.943760 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.943770 | orchestrator |
2026-01-02 00:54:51.943781 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-01-02 00:54:51.943791 | orchestrator | Friday 02 January 2026 00:50:01 +0000 (0:00:00.340) 0:01:33.348 ********
2026-01-02 00:54:51.943801 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.943811 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.943820 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.943830 | orchestrator |
2026-01-02 00:54:51.943840 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-01-02 00:54:51.943850 | orchestrator | Friday 02 January 2026 00:50:02 +0000 (0:00:01.020) 0:01:34.369 ********
2026-01-02 00:54:51.943860 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:54:51.943869 | orchestrator |
2026-01-02 00:54:51.943879 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-01-02 00:54:51.943889 | orchestrator | Friday 02 January 2026 00:50:02 +0000 (0:00:00.682) 0:01:35.051 ********
2026-01-02 00:54:51.943907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.943920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.943931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.943958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.943970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.943985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.943995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.944044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944079 | orchestrator |
2026-01-02 00:54:51.944089 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-01-02 00:54:51.944099 | orchestrator | Friday 02 January 2026 00:50:05 +0000 (0:00:03.114) 0:01:38.166 ********
2026-01-02 00:54:51.944110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.944127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944169 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.944180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.944191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944235 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.944249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.944260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944293 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.944301 | orchestrator |
2026-01-02 00:54:51.944310 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-01-02 00:54:51.944318 | orchestrator | Friday 02 January 2026 00:50:06 +0000 (0:00:00.635) 0:01:38.801 ********
2026-01-02 00:54:51.944327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.944341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.944349 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.944358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.944366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.944374 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.944382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.944390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.944398 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.944406 | orchestrator |
2026-01-02 00:54:51.944414 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-01-02 00:54:51.944426 | orchestrator | Friday 02 January 2026 00:50:07 +0000 (0:00:00.850) 0:01:39.651 ********
2026-01-02 00:54:51.944434 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.944442 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.944450 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.944458 | orchestrator |
2026-01-02 00:54:51.944466 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-01-02 00:54:51.944474 | orchestrator | Friday 02 January 2026 00:50:08 +0000 (0:00:01.452) 0:01:41.104 ********
2026-01-02 00:54:51.944482 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.944490 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.944498 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.944506 | orchestrator |
2026-01-02 00:54:51.944513 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-01-02 00:54:51.944522 | orchestrator | Friday 02 January 2026 00:50:10 +0000 (0:00:02.122) 0:01:43.227 ********
2026-01-02 00:54:51.944535 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.944543 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.944551 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.944559 | orchestrator |
2026-01-02 00:54:51.944567 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-01-02 00:54:51.944575 | orchestrator | Friday 02 January 2026 00:50:11 +0000 (0:00:00.311) 0:01:43.538 ********
2026-01-02 00:54:51.944583 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.944591 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.944600 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.944607 | orchestrator |
2026-01-02 00:54:51.944615 | orchestrator | TASK [include_role : designate] ************************************************
2026-01-02 00:54:51.944623 | orchestrator | Friday 02 January 2026 00:50:11 +0000 (0:00:00.307) 0:01:43.845 ********
2026-01-02 00:54:51.944631 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:54:51.944639 | orchestrator |
2026-01-02 00:54:51.944647 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-01-02 00:54:51.944671 | orchestrator | Friday 02 January 2026 00:50:12 +0000 (0:00:01.198) 0:01:45.044 ********
2026-01-02 00:54:51.944680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.944696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-02 00:54:51.944705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.944762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.944770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-02 00:54:51.944783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944813 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.944847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:54:51.944861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944908 | orchestrator | 2026-01-02 00:54:51.944917 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-02 00:54:51.944925 | orchestrator | Friday 02 January 2026 00:50:17 +0000 (0:00:04.959) 0:01:50.004 ******** 2026-01-02 00:54:51.944940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.944954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:54:51.944963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.944994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945016 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.945029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.945037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:54:51.945045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945102 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.945111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.945119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-02 00:54:51.945128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.945183 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.945191 | orchestrator | 2026-01-02 00:54:51.945199 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-02 00:54:51.945207 | orchestrator | Friday 02 January 2026 00:50:18 +0000 (0:00:01.013) 0:01:51.018 ******** 2026-01-02 00:54:51.945216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.945225 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.945234 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.945242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.945251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.945259 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.945267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.945276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.945289 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.945297 | orchestrator | 2026-01-02 00:54:51.945309 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-02 00:54:51.945318 | orchestrator | Friday 02 January 2026 00:50:20 +0000 (0:00:01.849) 0:01:52.867 ******** 2026-01-02 00:54:51.945326 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.945334 | orchestrator | 
changed: [testbed-node-1] 2026-01-02 00:54:51.945342 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.945350 | orchestrator | 2026-01-02 00:54:51.945358 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-02 00:54:51.945366 | orchestrator | Friday 02 January 2026 00:50:21 +0000 (0:00:01.133) 0:01:54.001 ******** 2026-01-02 00:54:51.945374 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.945381 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.945389 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.945397 | orchestrator | 2026-01-02 00:54:51.945405 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-02 00:54:51.945413 | orchestrator | Friday 02 January 2026 00:50:23 +0000 (0:00:01.816) 0:01:55.817 ******** 2026-01-02 00:54:51.945421 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.945429 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.945437 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.945445 | orchestrator | 2026-01-02 00:54:51.945453 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-02 00:54:51.945461 | orchestrator | Friday 02 January 2026 00:50:23 +0000 (0:00:00.291) 0:01:56.109 ******** 2026-01-02 00:54:51.945531 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.945540 | orchestrator | 2026-01-02 00:54:51.945548 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-02 00:54:51.945557 | orchestrator | Friday 02 January 2026 00:50:24 +0000 (0:00:00.885) 0:01:56.994 ******** 2026-01-02 00:54:51.945571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-02 00:54:51.945967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.946008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-02 00:54:51.946056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.946082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-02 00:54:51.946093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.946108 | orchestrator | 2026-01-02 00:54:51.946120 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-02 00:54:51.946129 | orchestrator | Friday 02 January 2026 00:50:29 +0000 (0:00:04.311) 0:02:01.305 ******** 2026-01-02 00:54:51.946174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-02 00:54:51.946187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.946202 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.946217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-02 00:54:51.946271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.946296 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.946478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-02 00:54:51.946499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.946516 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.946528 | orchestrator | 2026-01-02 00:54:51.946543 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-02 00:54:51.946557 | orchestrator | Friday 02 January 2026 00:50:32 +0000 (0:00:03.482) 0:02:04.788 ******** 2026-01-02 00:54:51.946572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:54:51.946605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:54:51.946620 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.946635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:54:51.946723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:54:51.946747 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.946762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:54:51.946776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-02 00:54:51.946798 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.946808 | orchestrator | 2026-01-02 00:54:51.946817 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-02 00:54:51.946827 | orchestrator | Friday 02 January 2026 00:50:35 +0000 (0:00:03.317) 0:02:08.106 ******** 2026-01-02 00:54:51.946837 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.946846 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.946854 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.946862 | orchestrator | 2026-01-02 00:54:51.946870 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-02 00:54:51.946878 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:01.192) 0:02:09.298 ******** 2026-01-02 00:54:51.946884 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.946891 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.946897 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.946905 | orchestrator | 2026-01-02 00:54:51.946911 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-02 
00:54:51.946918 | orchestrator | Friday 02 January 2026 00:50:38 +0000 (0:00:01.882) 0:02:11.181 ******** 2026-01-02 00:54:51.946925 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.946932 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.946938 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.946945 | orchestrator | 2026-01-02 00:54:51.946952 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-02 00:54:51.946959 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:00.256) 0:02:11.438 ******** 2026-01-02 00:54:51.946965 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.946972 | orchestrator | 2026-01-02 00:54:51.946979 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-02 00:54:51.946986 | orchestrator | Friday 02 January 2026 00:50:39 +0000 (0:00:00.742) 0:02:12.181 ******** 2026-01-02 00:54:51.947000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.947008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.947020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.947033 | orchestrator | 2026-01-02 00:54:51.947040 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-02 00:54:51.947047 | orchestrator | Friday 02 January 2026 00:50:43 +0000 (0:00:03.292) 0:02:15.473 ******** 2026-01-02 00:54:51.947054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.947061 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.947068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.947075 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.947089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.947096 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.947103 | orchestrator | 2026-01-02 00:54:51.947110 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-02 00:54:51.947117 | orchestrator | Friday 02 January 2026 00:50:43 +0000 (0:00:00.406) 0:02:15.880 ******** 2026-01-02 00:54:51.947124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.947132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.947147 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.947153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.947163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.947170 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.947177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.947184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.947200 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.947207 | orchestrator | 2026-01-02 00:54:51.947331 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-02 00:54:51.947339 | orchestrator | Friday 02 January 2026 00:50:44 +0000 (0:00:00.647) 0:02:16.528 ******** 2026-01-02 00:54:51.947346 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.947353 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.947360 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.947367 | orchestrator | 2026-01-02 00:54:51.947373 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-02 00:54:51.947380 | orchestrator | Friday 02 January 2026 00:50:45 +0000 (0:00:01.609) 0:02:18.137 ******** 2026-01-02 00:54:51.947387 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.947393 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.947400 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.947407 | orchestrator | 2026-01-02 00:54:51.947413 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-02 00:54:51.947420 | orchestrator | Friday 02 January 2026 00:50:48 +0000 (0:00:02.157) 0:02:20.295 ******** 2026-01-02 00:54:51.947427 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.947433 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.947440 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.947447 | orchestrator | 
2026-01-02 00:54:51.947453 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-02 00:54:51.947460 | orchestrator | Friday 02 January 2026 00:50:48 +0000 (0:00:00.334) 0:02:20.629 ******** 2026-01-02 00:54:51.947467 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.947473 | orchestrator | 2026-01-02 00:54:51.947480 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-02 00:54:51.947507 | orchestrator | Friday 02 January 2026 00:50:49 +0000 (0:00:00.849) 0:02:21.479 ******** 2026-01-02 00:54:51.947527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 00:54:51.947542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 00:54:51.947560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 00:54:51.947577 | orchestrator | 2026-01-02 00:54:51.947584 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-02 00:54:51.947591 | orchestrator | Friday 02 January 2026 00:50:53 +0000 (0:00:03.874) 0:02:25.353 ******** 2026-01-02 00:54:51.947602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 00:54:51.947615 | orchestrator | 
skipping: [testbed-node-0] 2026-01-02 00:54:51.947627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 00:54:51.947635 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.947646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 00:54:51.947675 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.947682 | orchestrator | 2026-01-02 00:54:51.947689 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-02 00:54:51.947696 | orchestrator | Friday 02 January 2026 00:50:53 +0000 (0:00:00.909) 0:02:26.263 ******** 2026-01-02 00:54:51.947703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-02 00:54:51.947715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:54:51.947723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-02 00:54:51.947732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:54:51.947739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-02 00:54:51.947746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-02 00:54:51.947753 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.947760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:54:51.947767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-02 00:54:51.947774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:54:51.947787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-02 00:54:51.947794 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.947805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-02 00:54:51.947812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:54:51.947819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-02 00:54:51.947826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-02 00:54:51.947834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-02 00:54:51.947844 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.947851 | orchestrator | 2026-01-02 00:54:51.947858 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-02 00:54:51.947865 | orchestrator | Friday 02 January 2026 00:50:54 +0000 (0:00:00.909) 0:02:27.172 ******** 2026-01-02 00:54:51.947871 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.947878 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.947885 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.947891 | orchestrator | 2026-01-02 00:54:51.947898 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-02 00:54:51.947905 | orchestrator | Friday 02 January 2026 00:50:56 +0000 (0:00:01.923) 0:02:29.096 ******** 2026-01-02 00:54:51.947912 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.947919 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.947925 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.947932 | orchestrator | 2026-01-02 00:54:51.947939 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-02 00:54:51.947945 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:02.064) 0:02:31.160 ******** 2026-01-02 00:54:51.947952 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.947959 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.947965 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.947972 | orchestrator | 2026-01-02 00:54:51.947979 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-02 
00:54:51.947986 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.330) 0:02:31.491 ******** 2026-01-02 00:54:51.947993 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.947999 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.948006 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.948013 | orchestrator | 2026-01-02 00:54:51.948019 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-02 00:54:51.948031 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.312) 0:02:31.803 ******** 2026-01-02 00:54:51.948038 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.948045 | orchestrator | 2026-01-02 00:54:51.948052 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-02 00:54:51.948058 | orchestrator | Friday 02 January 2026 00:51:01 +0000 (0:00:01.548) 0:02:33.352 ******** 2026-01-02 00:54:51.948066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:54:51.948078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:54:51.948086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:54:51.948097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:54:51.948105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:54:51.948118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:54:51.948129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:54:51.948136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:54:51.948147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-02 00:54:51.948155 | orchestrator |
2026-01-02 00:54:51.948162 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-01-02 00:54:51.948169 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:03.512) 0:02:36.865 ********
2026-01-02 00:54:51.948176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-02 00:54:51.948188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-02 00:54:51.948195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-02 00:54:51.948203 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.948411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-02 00:54:51.948428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-02 00:54:51.948436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-02 00:54:51.948448 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.948456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-02 00:54:51.948463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-02 00:54:51.948476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-02 00:54:51.948483 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.948490 | orchestrator |
2026-01-02 00:54:51.948497 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-01-02 00:54:51.948504 | orchestrator | Friday 02 January 2026 00:51:05 +0000 (0:00:00.968) 0:02:37.833 ********
2026-01-02 00:54:51.948512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-02 00:54:51.948519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-02 00:54:51.948527 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.948538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-02 00:54:51.948552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-02 00:54:51.948559 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.948566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-02 00:54:51.948573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-02 00:54:51.948580 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.948586 | orchestrator |
2026-01-02 00:54:51.948593 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-01-02 00:54:51.948600 | orchestrator | Friday 02 January 2026 00:51:06 +0000 (0:00:01.020) 0:02:38.854 ********
2026-01-02 00:54:51.948607 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.948613 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.948620 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.948627 | orchestrator |
2026-01-02 00:54:51.948634 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-01-02 00:54:51.948640 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:01.186) 0:02:40.040 ********
2026-01-02 00:54:51.948647 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.948654 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.948679 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.948686 | orchestrator |
2026-01-02 00:54:51.948693 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-01-02 00:54:51.948699 | orchestrator | Friday 02 January 2026 00:51:09 +0000 (0:00:02.090) 0:02:42.130 ********
2026-01-02 00:54:51.948706 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.948713 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.948720 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.948726 | orchestrator |
2026-01-02 00:54:51.948733 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-01-02 00:54:51.948740 | orchestrator | Friday 02 January 2026 00:51:10 +0000 (0:00:00.276) 0:02:42.407 ********
2026-01-02 00:54:51.948747 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:54:51.948753 | orchestrator |
2026-01-02 00:54:51.948760 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-01-02 00:54:51.948767 | orchestrator | Friday 02 January 2026 00:51:11 +0000 (0:00:01.070) 0:02:43.477 ********
2026-01-02 00:54:51.948788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949397 | orchestrator |
2026-01-02 00:54:51.949405 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-01-02 00:54:51.949412 | orchestrator | Friday 02 January 2026 00:51:14 +0000 (0:00:03.111) 0:02:46.588 ********
2026-01-02 00:54:51.949423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949438 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.949445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949470 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.949481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949496 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.949502 | orchestrator |
2026-01-02 00:54:51.949509 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-01-02 00:54:51.949516 | orchestrator | Friday 02 January 2026 00:51:14 +0000 (0:00:00.575) 0:02:47.164 ********
2026-01-02 00:54:51.949524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.949531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.949538 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.949545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.949552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.949559 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.949566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.949573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-02 00:54:51.949584 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.949591 | orchestrator |
2026-01-02 00:54:51.949598 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-01-02 00:54:51.949605 | orchestrator | Friday 02 January 2026 00:51:15 +0000 (0:00:00.790) 0:02:47.955 ********
2026-01-02 00:54:51.949615 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.949622 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.949629 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.949635 | orchestrator |
2026-01-02 00:54:51.949642 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-01-02 00:54:51.949649 | orchestrator | Friday 02 January 2026 00:51:17 +0000 (0:00:01.457) 0:02:49.412 ********
2026-01-02 00:54:51.949707 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.949716 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.949723 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.949738 | orchestrator |
2026-01-02 00:54:51.949745 | orchestrator | TASK [include_role : manila] ***************************************************
2026-01-02 00:54:51.949752 | orchestrator | Friday 02 January 2026 00:51:18 +0000 (0:00:01.807) 0:02:51.220 ********
2026-01-02 00:54:51.949883 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:54:51.949889 | orchestrator |
2026-01-02 00:54:51.949896 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-01-02 00:54:51.949903 | orchestrator | Friday 02 January 2026 00:51:20 +0000 (0:00:01.308) 0:02:52.529 ********
2026-01-02 00:54:51.949942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.949952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.949959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.950220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.950257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950292 | orchestrator |
2026-01-02 00:54:51.950299 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-01-02 00:54:51.950305 | orchestrator | Friday 02 January 2026 00:51:23 +0000 (0:00:03.695) 0:02:56.224 ********
2026-01-02 00:54:51.950338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.950348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950512 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.950557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.950569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.950576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True,
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.950583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.950597 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.950604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.950616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.950623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.950633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.950639 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.950646 | orchestrator | 2026-01-02 00:54:51.950652 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-02 00:54:51.950691 | orchestrator | Friday 02 January 2026 00:51:24 +0000 (0:00:00.802) 0:02:57.026 ******** 2026-01-02 00:54:51.950698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.950705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.950716 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.950722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.950729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.950735 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.950742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.950755 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.950762 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.950768 | orchestrator | 2026-01-02 00:54:51.950774 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-02 00:54:51.950780 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:01.063) 0:02:58.090 ******** 2026-01-02 00:54:51.950787 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.950793 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.950801 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.950809 | orchestrator | 2026-01-02 00:54:51.950816 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-02 00:54:51.950876 | orchestrator | Friday 02 January 2026 00:51:27 +0000 (0:00:01.465) 0:02:59.556 ******** 2026-01-02 00:54:51.950886 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.950893 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.950900 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.950907 | orchestrator | 2026-01-02 00:54:51.951186 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-02 00:54:51.951198 | orchestrator | Friday 02 January 2026 00:51:29 +0000 (0:00:02.061) 0:03:01.618 ******** 2026-01-02 00:54:51.951221 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.951228 | orchestrator | 2026-01-02 00:54:51.951234 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-02 00:54:51.951241 | orchestrator | Friday 02 January 2026 00:51:30 +0000 (0:00:01.209) 
0:03:02.828 ******** 2026-01-02 00:54:51.951247 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-02 00:54:51.951253 | orchestrator | 2026-01-02 00:54:51.951259 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-02 00:54:51.951266 | orchestrator | Friday 02 January 2026 00:51:33 +0000 (0:00:02.831) 0:03:05.659 ******** 2026-01-02 00:54:51.951279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:54:51.951293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:54:51.951300 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.951321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:54:51.951330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:54:51.951341 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.951351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:54:51.951359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:54:51.951365 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.951371 | orchestrator | 2026-01-02 00:54:51.951378 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-02 00:54:51.951384 | orchestrator | Friday 02 January 2026 00:51:36 +0000 (0:00:03.286) 0:03:08.946 ******** 2026-01-02 00:54:51.951411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:54:51.951484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:54:51.951509 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.951516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:54:51.951541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:54:51.951549 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.951559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:54:51.951691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-02 00:54:51.951701 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.951822 | orchestrator | 2026-01-02 00:54:51.951830 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-02 00:54:51.951836 | orchestrator | Friday 02 January 2026 00:51:39 +0000 (0:00:02.712) 0:03:11.658 ******** 2026-01-02 00:54:51.951843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:54:51.951866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:54:51.951874 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.951880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:54:51.951892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:54:51.951899 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.951970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:54:51.951987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-02 00:54:51.951994 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952000 | orchestrator | 2026-01-02 00:54:51.952006 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-02 00:54:51.952013 | orchestrator | Friday 02 January 2026 00:51:41 +0000 (0:00:02.257) 0:03:13.916 ******** 2026-01-02 00:54:51.952019 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.952025 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.952031 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.952038 | orchestrator | 2026-01-02 00:54:51.952044 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-02 00:54:51.952050 | orchestrator | Friday 02 January 2026 00:51:43 +0000 (0:00:01.865) 0:03:15.781 ******** 2026-01-02 00:54:51.952056 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952062 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952069 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.952075 | orchestrator | 2026-01-02 00:54:51.952081 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-02 00:54:51.952096 | orchestrator | Friday 02 January 2026 00:51:44 +0000 (0:00:01.334) 0:03:17.116 ******** 2026-01-02 00:54:51.952103 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952109 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952173 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.952180 | orchestrator | 2026-01-02 00:54:51.952275 | orchestrator | TASK 
[include_role : memcached] ************************************************ 2026-01-02 00:54:51.952285 | orchestrator | Friday 02 January 2026 00:51:45 +0000 (0:00:00.279) 0:03:17.396 ******** 2026-01-02 00:54:51.952330 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.952338 | orchestrator | 2026-01-02 00:54:51.952344 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-02 00:54:51.952351 | orchestrator | Friday 02 January 2026 00:51:46 +0000 (0:00:01.160) 0:03:18.556 ******** 2026-01-02 00:54:51.952376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:54:51.952388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:54:51.952395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-02 00:54:51.952402 | orchestrator | 2026-01-02 00:54:51.952408 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-02 00:54:51.952414 | orchestrator | Friday 02 January 2026 00:51:47 +0000 (0:00:01.569) 0:03:20.126 ******** 2026-01-02 00:54:51.952421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-01-02 00:54:51.952427 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:54:51.952445 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-02 00:54:51.952475 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.952482 | orchestrator | 2026-01-02 00:54:51.952490 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-02 
00:54:51.952497 | orchestrator | Friday 02 January 2026 00:51:48 +0000 (0:00:00.348) 0:03:20.474 ******** 2026-01-02 00:54:51.952504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-02 00:54:51.952516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-02 00:54:51.952524 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952531 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-02 00:54:51.952546 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.952553 | orchestrator | 2026-01-02 00:54:51.952560 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-02 00:54:51.952567 | orchestrator | Friday 02 January 2026 00:51:48 +0000 (0:00:00.744) 0:03:21.219 ******** 2026-01-02 00:54:51.952574 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952582 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952589 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.952596 | orchestrator | 2026-01-02 00:54:51.952603 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-02 00:54:51.952610 | orchestrator | 
Friday 02 January 2026 00:51:49 +0000 (0:00:00.382) 0:03:21.602 ******** 2026-01-02 00:54:51.952618 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952625 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.952632 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.952639 | orchestrator | 2026-01-02 00:54:51.952646 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-02 00:54:51.952652 | orchestrator | Friday 02 January 2026 00:51:50 +0000 (0:00:01.079) 0:03:22.681 ******** 2026-01-02 00:54:51.952675 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.952681 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.953374 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.953422 | orchestrator | 2026-01-02 00:54:51.953434 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-02 00:54:51.953444 | orchestrator | Friday 02 January 2026 00:51:50 +0000 (0:00:00.273) 0:03:22.955 ******** 2026-01-02 00:54:51.953455 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.953466 | orchestrator | 2026-01-02 00:54:51.953476 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-02 00:54:51.953484 | orchestrator | Friday 02 January 2026 00:51:51 +0000 (0:00:01.241) 0:03:24.197 ******** 2026-01-02 00:54:51.953492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.953532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.953547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-02 00:54:51.953554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-02 00:54:51.953565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.953574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.953600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.953613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-02 00:54:51.953629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.953637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.953649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.953704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.953732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-02 00:54:51.953790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.953802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.953919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-02 00:54:51.953927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.953947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2026-01-02 00:54:51.954208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-02 00:54:51.954230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.954236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-02 00:54:51.954259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-02 00:54:51.954275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.954285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.954297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.954303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  
2026-01-02 00:54:51.954322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-02 00:54:51.954328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-02 00:54:51.954336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.954345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-02 00:54:51.954357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.954376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.954382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.954418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-02 00:54:51.954424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.954443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.954450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.954485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.954496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.954502 | orchestrator | 2026-01-02 00:54:51.954508 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-02 00:54:51.954513 | orchestrator | Friday 02 January 2026 00:51:56 +0000 (0:00:04.578) 0:03:28.775 ******** 2026-01-02 00:54:51.954519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.954878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-02 00:54:51.955129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-02 00:54:51.955140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-02 00:54:51.955243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.955254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-02 00:54:51.955266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.955347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.955471 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.955480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.955487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.955521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-02 00:54:51.955564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-02 
00:54:51.955571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-02 00:54:51.955577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 
'false'}}})  2026-01-02 00:54:51.955598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-02 00:54:51.955619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955642 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-02 00:54:51.955748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.955772 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-02 00:54:51.955778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.955806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-02 00:54:51.955827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 
00:54:51.955847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-02 00:54:51.955854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-02 00:54:51.955867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.955887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.955898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.955905 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.955914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-02 00:54:51.955922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-02 00:54:51.955929 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.955936 | orchestrator | 2026-01-02 00:54:51.955942 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-02 00:54:51.955949 | orchestrator | Friday 02 January 2026 00:51:58 +0000 (0:00:02.099) 0:03:30.874 ******** 2026-01-02 00:54:51.955956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.955962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.955975 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.955980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.955991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.955997 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.956002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.956025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.956032 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.956037 | orchestrator | 2026-01-02 00:54:51.956043 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-02 00:54:51.956049 | orchestrator | Friday 02 January 2026 00:52:00 +0000 (0:00:01.560) 0:03:32.435 ******** 2026-01-02 00:54:51.956054 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.956137 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.956144 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.956149 | orchestrator | 2026-01-02 00:54:51.956154 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-02 00:54:51.956160 | orchestrator | Friday 02 January 2026 00:52:01 +0000 (0:00:01.156) 0:03:33.592 ******** 2026-01-02 00:54:51.956165 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.956171 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.956176 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.956182 | orchestrator | 2026-01-02 00:54:51.956187 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-02 00:54:51.956193 | orchestrator | Friday 02 January 2026 00:52:03 +0000 (0:00:01.891) 0:03:35.484 ******** 2026-01-02 00:54:51.956198 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.956204 | orchestrator | 2026-01-02 00:54:51.956209 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-02 00:54:51.956215 | orchestrator | Friday 02 January 2026 00:52:04 +0000 (0:00:01.525) 0:03:37.009 ******** 2026-01-02 00:54:51.956224 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-02 00:54:51.956231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-02 00:54:51.956435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-02 00:54:51.956446 | orchestrator | 2026-01-02 00:54:51.956451 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-02 00:54:51.956457 | orchestrator | Friday 02 January 2026 00:52:08 +0000 (0:00:03.946) 0:03:40.956 ******** 2026-01-02 00:54:51.956467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-02 00:54:51.956473 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.956479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-02 00:54:51.956489 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.956495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-02 00:54:51.956501 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.956506 | orchestrator | 2026-01-02 00:54:51.956512 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-02 00:54:51.956517 | orchestrator | Friday 02 January 2026 00:52:09 +0000 (0:00:00.486) 0:03:41.442 ******** 2026-01-02 00:54:51.956523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.956545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.956552 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.956557 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.956563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.956569 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.956574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.956586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.956591 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.956597 | orchestrator | 2026-01-02 00:54:51.956602 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-02 00:54:51.956608 | orchestrator | Friday 02 January 2026 00:52:10 +0000 (0:00:01.021) 0:03:42.464 ******** 2026-01-02 00:54:51.956613 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.956619 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.956624 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.956629 | orchestrator | 2026-01-02 00:54:51.956635 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-02 00:54:51.956640 | orchestrator | Friday 02 
January 2026 00:52:11 +0000 (0:00:01.291) 0:03:43.755 ******** 2026-01-02 00:54:51.956650 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.956671 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.956677 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.956683 | orchestrator | 2026-01-02 00:54:51.956688 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-02 00:54:51.956694 | orchestrator | Friday 02 January 2026 00:52:13 +0000 (0:00:02.477) 0:03:46.232 ******** 2026-01-02 00:54:51.956699 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.956705 | orchestrator | 2026-01-02 00:54:51.956710 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-02 00:54:51.956719 | orchestrator | Friday 02 January 2026 00:52:15 +0000 (0:00:01.244) 0:03:47.476 ******** 2026-01-02 00:54:51.956725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.956748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.956766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.956777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.956784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.956790 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.956811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.956819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.956828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.956837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.956911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.956935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.956942 | orchestrator | 2026-01-02 00:54:51.956948 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-02 00:54:51.956953 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:05.831) 0:03:53.308 ******** 2026-01-02 00:54:51.956962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.957061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.957069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.957075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.957081 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.957127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.957140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.957150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2026-01-02 00:54:51.957156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.957162 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.957168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.957191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 
'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.957205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.957212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-02 00:54:51.957217 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.957223 | orchestrator | 2026-01-02 00:54:51.957228 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-02 00:54:51.957234 | orchestrator | Friday 02 January 2026 00:52:21 +0000 (0:00:00.662) 0:03:53.970 ******** 2026-01-02 00:54:51.957240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957263 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.957268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957306 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.957316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.957341 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.957346 | orchestrator | 2026-01-02 00:54:51.957352 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-02 00:54:51.957357 | orchestrator | Friday 02 January 2026 00:52:22 +0000 (0:00:00.938) 0:03:54.909 ******** 2026-01-02 00:54:51.957363 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.957368 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.957374 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.957379 | orchestrator | 2026-01-02 00:54:51.957385 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-02 00:54:51.957390 | orchestrator | Friday 02 January 2026 00:52:24 +0000 (0:00:01.412) 0:03:56.322 ******** 2026-01-02 00:54:51.957396 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.957425 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.957432 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.957437 | orchestrator | 2026-01-02 00:54:51.957442 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-02 00:54:51.957448 | orchestrator | Friday 02 January 2026 00:52:25 +0000 (0:00:01.847) 0:03:58.169 ******** 2026-01-02 00:54:51.957453 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.957459 | orchestrator | 2026-01-02 00:54:51.957464 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-02 00:54:51.957469 | orchestrator | Friday 02 January 2026 00:52:27 +0000 (0:00:01.333) 0:03:59.503 ******** 2026-01-02 
00:54:51.957475 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-02 00:54:51.957481 | orchestrator | 2026-01-02 00:54:51.957486 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-02 00:54:51.957492 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:01.237) 0:04:00.741 ******** 2026-01-02 00:54:51.957498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-02 00:54:51.957503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-02 00:54:51.957532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-02 00:54:51.957539 | orchestrator | 2026-01-02 00:54:51.957544 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-02 00:54:51.957551 | orchestrator | Friday 02 January 2026 00:52:33 +0000 (0:00:05.432) 0:04:06.173 ******** 2026-01-02 00:54:51.957556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957562 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.957573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957579 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.957585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957590 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.957596 | orchestrator | 2026-01-02 00:54:51.957601 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-02 00:54:51.957607 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.992) 0:04:07.165 ******** 2026-01-02 00:54:51.957612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-02 00:54:51.957618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-02 00:54:51.957624 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.957629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-02 00:54:51.957635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-02 00:54:51.957645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-01-02 00:54:51.957650 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.957701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-02 00:54:51.957708 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.957714 | orchestrator | 2026-01-02 00:54:51.957719 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-02 00:54:51.957725 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:01.470) 0:04:08.636 ******** 2026-01-02 00:54:51.957730 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.957736 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.957741 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.957747 | orchestrator | 2026-01-02 00:54:51.957752 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-02 00:54:51.957778 | orchestrator | Friday 02 January 2026 00:52:38 +0000 (0:00:02.153) 0:04:10.789 ******** 2026-01-02 00:54:51.957786 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.957792 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.957798 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.957804 | orchestrator | 2026-01-02 00:54:51.957811 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-02 00:54:51.957817 | orchestrator | Friday 02 January 2026 00:52:41 +0000 (0:00:02.713) 0:04:13.503 ******** 2026-01-02 00:54:51.957823 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-2, testbed-node-0 => (item=nova-spicehtml5proxy) 2026-01-02 00:54:51.957830 | orchestrator | 2026-01-02 00:54:51.957836 | 
orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-02 00:54:51.957842 | orchestrator | Friday 02 January 2026 00:52:42 +0000 (0:00:01.071) 0:04:14.574 ******** 2026-01-02 00:54:51.957850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957857 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.957863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957921 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.957938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957950 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.957956 | orchestrator | 2026-01-02 00:54:51.957961 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-02 00:54:51.957967 | orchestrator | Friday 02 January 2026 00:52:43 +0000 (0:00:01.552) 0:04:16.126 ******** 2026-01-02 00:54:51.957973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957978 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.957984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.957989 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.958037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-02 00:54:51.958046 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.958051 | orchestrator | 2026-01-02 00:54:51.958057 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-02 00:54:51.958062 | orchestrator | Friday 02 January 2026 00:52:45 +0000 (0:00:01.920) 0:04:18.047 ******** 2026-01-02 00:54:51.958068 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.958073 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.958078 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.958084 | orchestrator | 2026-01-02 00:54:51.958089 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-02 00:54:51.958095 | orchestrator | Friday 02 January 2026 00:52:47 +0000 (0:00:01.520) 0:04:19.567 ******** 2026-01-02 00:54:51.958100 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.958106 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.958111 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.958117 | orchestrator | 2026-01-02 00:54:51.958122 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-02 00:54:51.958127 | orchestrator | Friday 02 January 2026 00:52:49 +0000 (0:00:02.081) 0:04:21.648 ******** 2026-01-02 00:54:51.958133 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.958138 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.958144 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.958149 | orchestrator | 2026-01-02 00:54:51.958155 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 
2026-01-02 00:54:51.958161 | orchestrator | Friday 02 January 2026 00:52:51 +0000 (0:00:02.355) 0:04:24.004 ********
2026-01-02 00:54:51.958169 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-01-02 00:54:51.958183 | orchestrator |
2026-01-02 00:54:51.958189 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-01-02 00:54:51.958194 | orchestrator | Friday 02 January 2026 00:52:52 +0000 (0:00:00.627) 0:04:24.631 ********
2026-01-02 00:54:51.958200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:54:51.958206 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.958211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:54:51.958217 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.958222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:54:51.958228 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.958233 | orchestrator |
2026-01-02 00:54:51.958239 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-01-02 00:54:51.958244 | orchestrator | Friday 02 January 2026 00:52:53 +0000 (0:00:01.250) 0:04:25.881 ********
2026-01-02 00:54:51.958249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:54:51.958254 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.958275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:54:51.958281 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.958286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-02 00:54:51.958295 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.958300 | orchestrator |
2026-01-02 00:54:51.958305 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-01-02 00:54:51.958310 | orchestrator | Friday 02 January 2026 00:52:54 +0000 (0:00:00.944) 0:04:26.826 ********
2026-01-02 00:54:51.958315 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.958320 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.958324 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.958329 | orchestrator |
2026-01-02 00:54:51.958336 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-02 00:54:51.958341 | orchestrator | Friday 02 January 2026 00:52:55 +0000 (0:00:01.344) 0:04:28.170 ********
2026-01-02 00:54:51.958346 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:54:51.958351 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:54:51.958356 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:54:51.958361 | orchestrator |
2026-01-02 00:54:51.958365 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-02 00:54:51.958370 | orchestrator | Friday 02 January 2026 00:52:58 +0000 (0:00:02.260) 0:04:30.431 ********
2026-01-02 00:54:51.958375 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:54:51.958380 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:54:51.958385 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:54:51.958389 | orchestrator |
2026-01-02 00:54:51.958394 | orchestrator | TASK [include_role : octavia] **************************************************
2026-01-02 00:54:51.958399 | orchestrator | Friday 02 January 2026 00:53:01 +0000 (0:00:03.165) 0:04:33.596 ********
2026-01-02 00:54:51.958404 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:54:51.958409 | orchestrator |
2026-01-02 00:54:51.958414 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-01-02 00:54:51.958418 | orchestrator | Friday 02 January 2026 00:53:02 +0000 (0:00:01.569) 0:04:35.165 ********
2026-01-02 00:54:51.958424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:54:51.958429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:54:51.958450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.958473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:54:51.958478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:54:51.958483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:54:51.958505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:54:51.958519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.958534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.958548 | orchestrator |
2026-01-02 00:54:51.958565 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-01-02 00:54:51.958571 | orchestrator | Friday 02 January 2026 00:53:06 +0000 (0:00:03.697) 0:04:38.862 ********
2026-01-02 00:54:51.958576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:54:51.958584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:54:51.958589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.958608 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.958626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:54:51.958632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:54:51.958640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.958666 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.958672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-02 00:54:51.958698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-02 00:54:51.958704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-02 00:54:51.958717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-02 00:54:51.958722 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.958727 | orchestrator |
2026-01-02 00:54:51.958732 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-01-02 00:54:51.958737 | orchestrator | Friday 02 January 2026 00:53:07 +0000 (0:00:01.092) 0:04:39.954 ********
2026-01-02 00:54:51.958742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-02 00:54:51.958747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-02 00:54:51.958752 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:54:51.958757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-02 00:54:51.958766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-02 00:54:51.958771 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:54:51.958776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-02 00:54:51.958781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-02 00:54:51.958786 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:54:51.958791 | orchestrator |
2026-01-02 00:54:51.958796 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-01-02 00:54:51.958801 | orchestrator | Friday 02 January 2026 00:53:08 +0000 (0:00:01.305) 0:04:41.260 ********
2026-01-02 00:54:51.958805 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.958810 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.958815 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.958820 | orchestrator |
2026-01-02 00:54:51.958824 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-01-02 00:54:51.958829 | orchestrator | Friday 02 January 2026 00:53:10 +0000 (0:00:01.308) 0:04:42.568 ********
2026-01-02 00:54:51.958847 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:54:51.958853 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:54:51.958858 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:54:51.958862 | orchestrator |
2026-01-02 00:54:51.958867 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-01-02 00:54:51.958872 | orchestrator | Friday 02 January 2026 00:53:12 +0000 (0:00:02.273) 0:04:44.842 ********
2026-01-02 00:54:51.958877 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:54:51.958882 | orchestrator |
2026-01-02 00:54:51.958887 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-01-02 00:54:51.958892 | orchestrator | Friday 02 January 2026 00:53:14 +0000 (0:00:01.691) 0:04:46.534 ********
2026-01-02 00:54:51.958899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.958906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.958915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.958933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-02 00:54:51.958944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-02 00:54:51.958950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-02 00:54:51.958959 | orchestrator |
2026-01-02 00:54:51.958964 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-01-02 00:54:51.958969 | orchestrator | Friday 02 January 2026 00:53:19 +0000 (0:00:05.476) 0:04:52.010 ********
2026-01-02 00:54:51.958974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 00:54:51.958993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group':
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:54:51.958999 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}})  2026-01-02 00:54:51.959013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:54:51.959022 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.959046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:54:51.959052 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959057 | orchestrator | 2026-01-02 00:54:51.959062 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-02 00:54:51.959067 | orchestrator | Friday 02 January 2026 00:53:20 +0000 (0:00:00.662) 0:04:52.673 ******** 2026-01-02 00:54:51.959072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-02 00:54:51.959086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-02 00:54:51.959094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959099 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-02 00:54:51.959109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-02 00:54:51.959114 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-02 00:54:51.959129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-02 00:54:51.959134 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959138 | orchestrator | 2026-01-02 00:54:51.959143 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-02 00:54:51.959148 | orchestrator | Friday 02 January 2026 00:53:22 +0000 (0:00:01.637) 0:04:54.311 ******** 2026-01-02 00:54:51.959153 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959157 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959162 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959167 | orchestrator | 2026-01-02 00:54:51.959172 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-02 00:54:51.959176 | orchestrator | Friday 02 January 2026 00:53:22 +0000 (0:00:00.473) 0:04:54.785 ******** 2026-01-02 00:54:51.959194 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959199 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959204 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959209 | orchestrator | 2026-01-02 00:54:51.959214 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2026-01-02 00:54:51.959219 | orchestrator | Friday 02 January 2026 00:53:23 +0000 (0:00:01.389) 0:04:56.175 ******** 2026-01-02 00:54:51.959224 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.959229 | orchestrator | 2026-01-02 00:54:51.959233 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-02 00:54:51.959238 | orchestrator | Friday 02 January 2026 00:53:25 +0000 (0:00:01.919) 0:04:58.094 ******** 2026-01-02 00:54:51.959246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-02 00:54:51.959255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:54:51.959261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959290 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-02 00:54:51.959300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:54:51.959308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready 
HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-02 00:54:51.959342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:54:51.959348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.959377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-02 00:54:51.959395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.959423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-02 00:54:51.959428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:54:51.959469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-02 00:54:51.959474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959493 | orchestrator | 2026-01-02 00:54:51.959510 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-02 00:54:51.959515 | orchestrator | Friday 02 January 2026 00:53:30 +0000 (0:00:04.475) 0:05:02.570 
******** 2026-01-02 00:54:51.959521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-02 00:54:51.959529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:54:51.959534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.959572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-02 00:54:51.959580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 
00:54:51.959585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959595 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-02 00:54:51.959612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:54:51.959617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.959640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-02 00:54:51.959654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-02 00:54:51.959676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 00:54:51.959681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959710 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:54:51.959734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-02 00:54:51.959739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 00:54:51.959752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 00:54:51.959757 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959762 | orchestrator | 2026-01-02 00:54:51.959767 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-02 00:54:51.959774 | orchestrator | Friday 02 January 2026 00:53:31 +0000 (0:00:00.861) 0:05:03.431 ******** 2026-01-02 00:54:51.959784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-02 00:54:51.959793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-02 00:54:51.959802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959817 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-02 00:54:51.959827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-02 00:54:51.959833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959846 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-02 00:54:51.959857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-02 00:54:51.959862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-02 00:54:51.959874 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959880 | orchestrator | 2026-01-02 00:54:51.959884 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-02 00:54:51.959889 | orchestrator | Friday 02 January 2026 00:53:32 +0000 (0:00:01.001) 0:05:04.433 ******** 2026-01-02 00:54:51.959894 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959899 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959904 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959909 | orchestrator | 2026-01-02 00:54:51.959914 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-02 00:54:51.959918 | orchestrator | Friday 02 January 2026 00:53:32 +0000 (0:00:00.848) 0:05:05.281 ******** 2026-01-02 00:54:51.959923 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.959928 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.959933 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.959938 | orchestrator | 2026-01-02 00:54:51.959942 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-02 00:54:51.959947 | orchestrator | Friday 02 January 2026 00:53:34 +0000 (0:00:01.318) 0:05:06.600 ******** 2026-01-02 00:54:51.959952 | 
orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.959957 | orchestrator | 2026-01-02 00:54:51.959962 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-02 00:54:51.959967 | orchestrator | Friday 02 January 2026 00:53:35 +0000 (0:00:01.414) 0:05:08.014 ******** 2026-01-02 00:54:51.959975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:54:51.959984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:54:51.959992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-02 00:54:51.959998 | orchestrator | 2026-01-02 00:54:51.960003 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-02 00:54:51.960008 | orchestrator | Friday 02 January 2026 00:53:38 +0000 (0:00:02.712) 0:05:10.727 ******** 2026-01-02 00:54:51.960013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:54:51.960018 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:54:51.960035 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-02 00:54:51.960045 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960050 | orchestrator | 2026-01-02 00:54:51.960055 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-02 00:54:51.960060 | orchestrator | Friday 02 January 2026 00:53:39 +0000 (0:00:00.882) 0:05:11.609 ******** 2026-01-02 00:54:51.960065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-02 00:54:51.960070 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-02 00:54:51.960080 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  
2026-01-02 00:54:51.960090 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960095 | orchestrator | 2026-01-02 00:54:51.960102 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-02 00:54:51.960107 | orchestrator | Friday 02 January 2026 00:53:39 +0000 (0:00:00.620) 0:05:12.230 ******** 2026-01-02 00:54:51.960112 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960116 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960121 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960126 | orchestrator | 2026-01-02 00:54:51.960131 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-02 00:54:51.960136 | orchestrator | Friday 02 January 2026 00:53:40 +0000 (0:00:00.480) 0:05:12.711 ******** 2026-01-02 00:54:51.960141 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960146 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960150 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960155 | orchestrator | 2026-01-02 00:54:51.960160 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-02 00:54:51.960165 | orchestrator | Friday 02 January 2026 00:53:41 +0000 (0:00:01.451) 0:05:14.162 ******** 2026-01-02 00:54:51.960173 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.960178 | orchestrator | 2026-01-02 00:54:51.960183 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-02 00:54:51.960187 | orchestrator | Friday 02 January 2026 00:53:43 +0000 (0:00:01.911) 0:05:16.074 ******** 2026-01-02 00:54:51.960195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-02 00:54:51.960201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-02 00:54:51.960206 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-02 00:54:51.960215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-02 00:54:51.960227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-02 00:54:51.960233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-02 00:54:51.960238 | orchestrator | 2026-01-02 00:54:51.960243 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-02 00:54:51.960248 | orchestrator | Friday 02 January 2026 00:53:50 +0000 (0:00:06.330) 0:05:22.404 ******** 2026-01-02 00:54:51.960256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-02 00:54:51.960262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-02 00:54:51.960271 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-02 00:54:51.960284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-02 00:54:51.960290 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-02 00:54:51.960306 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-02 00:54:51.960312 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960317 | orchestrator | 2026-01-02 00:54:51.960326 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-02 00:54:51.960331 | orchestrator | Friday 02 January 2026 00:53:51 +0000 (0:00:01.073) 0:05:23.478 ******** 2026-01-02 00:54:51.960336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-02 00:54:51.960341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-02 00:54:51.960346 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.960351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.960357 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-02 00:54:51.960367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-02 00:54:51.960371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.960376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.960381 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960386 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-02 00:54:51.960398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-02 00:54:51.960403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.960408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-02 00:54:51.960413 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960417 | orchestrator | 2026-01-02 00:54:51.960423 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-02 00:54:51.960428 | orchestrator | Friday 02 January 2026 00:53:52 +0000 (0:00:01.498) 0:05:24.976 ******** 2026-01-02 00:54:51.960432 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.960437 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.960442 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.960447 | orchestrator | 2026-01-02 00:54:51.960452 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-02 00:54:51.960457 | orchestrator | Friday 02 January 2026 00:53:53 +0000 
(0:00:01.214) 0:05:26.191 ******** 2026-01-02 00:54:51.960461 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.960466 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.960471 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.960476 | orchestrator | 2026-01-02 00:54:51.960483 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-02 00:54:51.960488 | orchestrator | Friday 02 January 2026 00:53:56 +0000 (0:00:02.207) 0:05:28.398 ******** 2026-01-02 00:54:51.960493 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960498 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960503 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960508 | orchestrator | 2026-01-02 00:54:51.960513 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-02 00:54:51.960517 | orchestrator | Friday 02 January 2026 00:53:56 +0000 (0:00:00.320) 0:05:28.719 ******** 2026-01-02 00:54:51.960522 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960527 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960532 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960536 | orchestrator | 2026-01-02 00:54:51.960541 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-02 00:54:51.960546 | orchestrator | Friday 02 January 2026 00:53:57 +0000 (0:00:00.619) 0:05:29.339 ******** 2026-01-02 00:54:51.960551 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960556 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960560 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960565 | orchestrator | 2026-01-02 00:54:51.960570 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-02 00:54:51.960575 | orchestrator | Friday 02 January 2026 00:53:57 +0000 
(0:00:00.307) 0:05:29.647 ******** 2026-01-02 00:54:51.960580 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960585 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960589 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960594 | orchestrator | 2026-01-02 00:54:51.960599 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-02 00:54:51.960604 | orchestrator | Friday 02 January 2026 00:53:57 +0000 (0:00:00.346) 0:05:29.994 ******** 2026-01-02 00:54:51.960612 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960617 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960622 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960627 | orchestrator | 2026-01-02 00:54:51.960631 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-01-02 00:54:51.960636 | orchestrator | Friday 02 January 2026 00:53:58 +0000 (0:00:00.336) 0:05:30.330 ******** 2026-01-02 00:54:51.960641 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:54:51.960646 | orchestrator | 2026-01-02 00:54:51.960651 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-02 00:54:51.960667 | orchestrator | Friday 02 January 2026 00:53:59 +0000 (0:00:01.750) 0:05:32.081 ******** 2026-01-02 00:54:51.960673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.960681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.960687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-02 00:54:51.960694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.960700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.960708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-02 00:54:51.960713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-01-02 00:54:51.960718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.960726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-02 00:54:51.960732 | orchestrator | 2026-01-02 00:54:51.960737 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-02 00:54:51.960742 | orchestrator | Friday 02 January 2026 00:54:02 +0000 (0:00:02.429) 0:05:34.511 ******** 2026-01-02 00:54:51.960747 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:54:51.960752 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:54:51.960757 | orchestrator | } 2026-01-02 00:54:51.960762 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:54:51.960767 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:54:51.960771 | orchestrator | } 2026-01-02 00:54:51.960776 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:54:51.960781 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:54:51.960786 | orchestrator | } 2026-01-02 00:54:51.960791 | orchestrator 
| 2026-01-02 00:54:51.960796 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:54:51.960801 | orchestrator | Friday 02 January 2026 00:54:02 +0000 (0:00:00.370) 0:05:34.881 ******** 2026-01-02 00:54:51.960809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.960814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.960826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.960831 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.960836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.960841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.960849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.960854 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.960859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-02 00:54:51.960864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-02 00:54:51.960874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-02 00:54:51.960879 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.960884 | orchestrator | 2026-01-02 00:54:51.960889 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-02 00:54:51.960894 | orchestrator | Friday 02 January 2026 00:54:04 +0000 (0:00:01.678) 0:05:36.560 ******** 2026-01-02 00:54:51.960899 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.960904 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.960909 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.960913 | orchestrator | 2026-01-02 00:54:51.960918 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-02 00:54:51.960923 | orchestrator | Friday 02 January 2026 00:54:05 +0000 (0:00:01.202) 0:05:37.762 ******** 2026-01-02 00:54:51.960928 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.960933 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.960937 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.960942 | orchestrator | 2026-01-02 00:54:51.960947 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-02 00:54:51.960952 | orchestrator | Friday 02 January 2026 00:54:05 +0000 (0:00:00.357) 0:05:38.119 ******** 2026-01-02 00:54:51.960957 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.960961 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.960966 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.960971 | orchestrator | 2026-01-02 00:54:51.960976 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-02 00:54:51.960981 | orchestrator | Friday 02 January 2026 00:54:06 +0000 (0:00:00.980) 0:05:39.100 ******** 2026-01-02 00:54:51.960986 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.960990 | orchestrator | ok: [testbed-node-1] 
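The container definitions logged above carry Docker-style healthcheck settings (`interval: 30`, `retries: 3`, `start_period: 5`, with probes such as `healthcheck_curl` and `healthcheck_listen`). A minimal sketch of that retry discipline, with a `probe` callable standing in for the actual check command (the helper and its defaults here are illustrative, not the deployment's real implementation):

```python
import time

def run_healthcheck(probe, interval=30, retries=3, start_period=5, sleep=time.sleep):
    """Return True once probe() succeeds, allowing `retries` attempts.

    Mirrors the healthcheck dict seen in the container definitions:
    wait out the start period, then probe at most `retries` times,
    pausing `interval` seconds between failed attempts.
    """
    sleep(start_period)              # grace period before the first probe
    for attempt in range(retries):
        if probe():                  # probe stands in for healthcheck_curl etc.
            return True
        if attempt < retries - 1:
            sleep(interval)          # back off before the next attempt
    return False
```

With a probe that fails twice and then succeeds, the sleeps come out as the start period followed by two retry intervals.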
2026-01-02 00:54:51.960995 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.961000 | orchestrator | 2026-01-02 00:54:51.961005 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-02 00:54:51.961010 | orchestrator | Friday 02 January 2026 00:54:07 +0000 (0:00:00.968) 0:05:40.068 ******** 2026-01-02 00:54:51.961014 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.961019 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.961024 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.961029 | orchestrator | 2026-01-02 00:54:51.961033 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-02 00:54:51.961038 | orchestrator | Friday 02 January 2026 00:54:09 +0000 (0:00:01.408) 0:05:41.477 ******** 2026-01-02 00:54:51.961043 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.961048 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.961053 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.961057 | orchestrator | 2026-01-02 00:54:51.961062 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-02 00:54:51.961099 | orchestrator | Friday 02 January 2026 00:54:18 +0000 (0:00:09.622) 0:05:51.100 ******** 2026-01-02 00:54:51.961109 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.961114 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.961119 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.961123 | orchestrator | 2026-01-02 00:54:51.961136 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-02 00:54:51.961141 | orchestrator | Friday 02 January 2026 00:54:19 +0000 (0:00:00.818) 0:05:51.919 ******** 2026-01-02 00:54:51.961146 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.961151 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.961156 | orchestrator | 
changed: [testbed-node-1] 2026-01-02 00:54:51.961161 | orchestrator | 2026-01-02 00:54:51.961165 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-02 00:54:51.961170 | orchestrator | Friday 02 January 2026 00:54:34 +0000 (0:00:15.290) 0:06:07.209 ******** 2026-01-02 00:54:51.961175 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:54:51.961180 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:54:51.961185 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:54:51.961190 | orchestrator | 2026-01-02 00:54:51.961195 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-02 00:54:51.961199 | orchestrator | Friday 02 January 2026 00:54:36 +0000 (0:00:01.214) 0:06:08.423 ******** 2026-01-02 00:54:51.961204 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:54:51.961209 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:54:51.961214 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:54:51.961218 | orchestrator | 2026-01-02 00:54:51.961223 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-02 00:54:51.961228 | orchestrator | Friday 02 January 2026 00:54:44 +0000 (0:00:08.709) 0:06:17.133 ******** 2026-01-02 00:54:51.961233 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.961238 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.961243 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.961247 | orchestrator | 2026-01-02 00:54:51.961252 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-02 00:54:51.961257 | orchestrator | Friday 02 January 2026 00:54:45 +0000 (0:00:00.393) 0:06:17.527 ******** 2026-01-02 00:54:51.961262 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.961267 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.961275 | orchestrator | skipping: 
[testbed-node-2] 2026-01-02 00:54:51.961280 | orchestrator | 2026-01-02 00:54:51.961284 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-02 00:54:51.961289 | orchestrator | Friday 02 January 2026 00:54:45 +0000 (0:00:00.360) 0:06:17.887 ******** 2026-01-02 00:54:51.961294 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.961299 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.961303 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.961308 | orchestrator | 2026-01-02 00:54:51.961313 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-02 00:54:51.961318 | orchestrator | Friday 02 January 2026 00:54:46 +0000 (0:00:00.721) 0:06:18.608 ******** 2026-01-02 00:54:51.961323 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.961328 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.961332 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.961337 | orchestrator | 2026-01-02 00:54:51.961342 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-02 00:54:51.961347 | orchestrator | Friday 02 January 2026 00:54:46 +0000 (0:00:00.389) 0:06:18.998 ******** 2026-01-02 00:54:51.961352 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.961357 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.961361 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:54:51.961366 | orchestrator | 2026-01-02 00:54:51.961371 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-02 00:54:51.961376 | orchestrator | Friday 02 January 2026 00:54:47 +0000 (0:00:00.376) 0:06:19.374 ******** 2026-01-02 00:54:51.961381 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:54:51.961386 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:54:51.961390 | orchestrator | skipping: 
[testbed-node-2]
2026-01-02 00:54:51.961395 | orchestrator |
2026-01-02 00:54:51.961400 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-02 00:54:51.961409 | orchestrator | Friday 02 January 2026 00:54:47 +0000 (0:00:00.371) 0:06:19.745 ********
2026-01-02 00:54:51.961414 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:54:51.961419 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:54:51.961423 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:54:51.961428 | orchestrator |
2026-01-02 00:54:51.961433 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-02 00:54:51.961438 | orchestrator | Friday 02 January 2026 00:54:49 +0000 (0:00:01.614) 0:06:21.360 ********
2026-01-02 00:54:51.961443 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:54:51.961447 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:54:51.961452 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:54:51.961457 | orchestrator |
2026-01-02 00:54:51.961462 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:54:51.961467 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-02 00:54:51.961472 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-02 00:54:51.961477 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-02 00:54:51.961482 | orchestrator |
2026-01-02 00:54:51.961487 | orchestrator |
2026-01-02 00:54:51.961492 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:54:51.961497 | orchestrator | Friday 02 January 2026 00:54:49 +0000 (0:00:00.907) 0:06:22.268 ********
2026-01-02 00:54:51.961502 | orchestrator | ===============================================================================
2026-01-02 00:54:51.961506 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.29s
2026-01-02 00:54:51.961511 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.62s
2026-01-02 00:54:51.961516 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.71s
2026-01-02 00:54:51.961521 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.33s
2026-01-02 00:54:51.961528 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.83s
2026-01-02 00:54:51.961533 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.48s
2026-01-02 00:54:51.961538 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.43s
2026-01-02 00:54:51.961543 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.31s
2026-01-02 00:54:51.961547 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.96s
2026-01-02 00:54:51.961552 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.58s
2026-01-02 00:54:51.961557 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.48s
2026-01-02 00:54:51.961562 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.31s
2026-01-02 00:54:51.961566 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.95s
2026-01-02 00:54:51.961571 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.87s
2026-01-02 00:54:51.961576 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.86s
2026-01-02 00:54:51.961581 | orchestrator | haproxy-config
: Copying over octavia haproxy config -------------------- 3.70s 2026-01-02 00:54:51.961586 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.70s 2026-01-02 00:54:51.961591 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.63s 2026-01-02 00:54:51.961595 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.62s 2026-01-02 00:54:51.961600 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.51s 2026-01-02 00:54:51.961608 | orchestrator | 2026-01-02 00:54:51 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:54:51.961617 | orchestrator | 2026-01-02 00:54:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:54.980865 | orchestrator | 2026-01-02 00:54:54 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:54:54.982347 | orchestrator | 2026-01-02 00:54:54 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:54:54.984142 | orchestrator | 2026-01-02 00:54:54 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:54:54.984183 | orchestrator | 2026-01-02 00:54:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:54:58.027829 | orchestrator | 2026-01-02 00:54:58 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:54:58.028503 | orchestrator | 2026-01-02 00:54:58 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:54:58.029148 | orchestrator | 2026-01-02 00:54:58 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:54:58.029181 | orchestrator | 2026-01-02 00:54:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:01.068562 | orchestrator | 2026-01-02 00:55:01 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 
00:55:01.068698 | orchestrator | 2026-01-02 00:55:01 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:01.069301 | orchestrator | 2026-01-02 00:55:01 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:01.069349 | orchestrator | 2026-01-02 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:04.106292 | orchestrator | 2026-01-02 00:55:04 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:04.106579 | orchestrator | 2026-01-02 00:55:04 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:04.107485 | orchestrator | 2026-01-02 00:55:04 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:04.107634 | orchestrator | 2026-01-02 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:07.160597 | orchestrator | 2026-01-02 00:55:07 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:07.162064 | orchestrator | 2026-01-02 00:55:07 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:07.164175 | orchestrator | 2026-01-02 00:55:07 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:07.164571 | orchestrator | 2026-01-02 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:10.191567 | orchestrator | 2026-01-02 00:55:10 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:10.192346 | orchestrator | 2026-01-02 00:55:10 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:10.193053 | orchestrator | 2026-01-02 00:55:10 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:10.193078 | orchestrator | 2026-01-02 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:13.227939 | orchestrator | 2026-01-02 00:55:13 | 
INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:13.229664 | orchestrator | 2026-01-02 00:55:13 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:13.232221 | orchestrator | 2026-01-02 00:55:13 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:13.232264 | orchestrator | 2026-01-02 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:16.272325 | orchestrator | 2026-01-02 00:55:16 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:16.273856 | orchestrator | 2026-01-02 00:55:16 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:16.274921 | orchestrator | 2026-01-02 00:55:16 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:16.274942 | orchestrator | 2026-01-02 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:19.313751 | orchestrator | 2026-01-02 00:55:19 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:19.314163 | orchestrator | 2026-01-02 00:55:19 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:19.314686 | orchestrator | 2026-01-02 00:55:19 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:19.314717 | orchestrator | 2026-01-02 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:22.357385 | orchestrator | 2026-01-02 00:55:22 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:22.358811 | orchestrator | 2026-01-02 00:55:22 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:22.359829 | orchestrator | 2026-01-02 00:55:22 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:22.359866 | orchestrator | 2026-01-02 00:55:22 | INFO  | Wait 1 second(s) until 
the next check 2026-01-02 00:55:25.401515 | orchestrator | 2026-01-02 00:55:25 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:25.402873 | orchestrator | 2026-01-02 00:55:25 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:25.405363 | orchestrator | 2026-01-02 00:55:25 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:25.405407 | orchestrator | 2026-01-02 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:28.435517 | orchestrator | 2026-01-02 00:55:28 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:28.440122 | orchestrator | 2026-01-02 00:55:28 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:28.440222 | orchestrator | 2026-01-02 00:55:28 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:28.440238 | orchestrator | 2026-01-02 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:31.476550 | orchestrator | 2026-01-02 00:55:31 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:31.478006 | orchestrator | 2026-01-02 00:55:31 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:31.479573 | orchestrator | 2026-01-02 00:55:31 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:31.479871 | orchestrator | 2026-01-02 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:34.528094 | orchestrator | 2026-01-02 00:55:34 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:34.528294 | orchestrator | 2026-01-02 00:55:34 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:34.529479 | orchestrator | 2026-01-02 00:55:34 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 
00:55:34.529521 | orchestrator | 2026-01-02 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:37.675328 | orchestrator | 2026-01-02 00:55:37 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:37.675543 | orchestrator | 2026-01-02 00:55:37 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:37.676268 | orchestrator | 2026-01-02 00:55:37 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:37.676307 | orchestrator | 2026-01-02 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:40.720244 | orchestrator | 2026-01-02 00:55:40 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:40.721254 | orchestrator | 2026-01-02 00:55:40 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:40.721899 | orchestrator | 2026-01-02 00:55:40 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:40.721924 | orchestrator | 2026-01-02 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:43.769112 | orchestrator | 2026-01-02 00:55:43 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:43.770430 | orchestrator | 2026-01-02 00:55:43 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:43.776136 | orchestrator | 2026-01-02 00:55:43 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:43.776181 | orchestrator | 2026-01-02 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:46.821722 | orchestrator | 2026-01-02 00:55:46 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:46.821841 | orchestrator | 2026-01-02 00:55:46 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:46.822286 | orchestrator | 2026-01-02 00:55:46 | 
INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:46.822299 | orchestrator | 2026-01-02 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:49.865260 | orchestrator | 2026-01-02 00:55:49 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:49.866218 | orchestrator | 2026-01-02 00:55:49 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:49.868024 | orchestrator | 2026-01-02 00:55:49 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:49.868252 | orchestrator | 2026-01-02 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:52.911516 | orchestrator | 2026-01-02 00:55:52 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:52.912194 | orchestrator | 2026-01-02 00:55:52 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:52.914162 | orchestrator | 2026-01-02 00:55:52 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:52.914644 | orchestrator | 2026-01-02 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:55.953241 | orchestrator | 2026-01-02 00:55:55 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:55.954645 | orchestrator | 2026-01-02 00:55:55 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED 2026-01-02 00:55:55.956480 | orchestrator | 2026-01-02 00:55:55 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:55:55.956531 | orchestrator | 2026-01-02 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:55:59.009326 | orchestrator | 2026-01-02 00:55:59 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:55:59.009464 | orchestrator | 2026-01-02 00:55:59 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in 
state STARTED
2026-01-02 00:55:59.010323 | orchestrator | 2026-01-02 00:55:59 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED
2026-01-02 00:55:59.010350 | orchestrator | 2026-01-02 00:55:59 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:56:02.063974 | orchestrator | 2026-01-02 00:56:02 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED
2026-01-02 00:56:02.065419 | orchestrator | 2026-01-02 00:56:02 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state STARTED
2026-01-02 00:56:02.067435 | orchestrator | 2026-01-02 00:56:02 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED
2026-01-02 00:56:02.068154 | orchestrator | 2026-01-02 00:56:02 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:56:56.932440 | orchestrator | 2026-01-02 00:56:56 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED
2026-01-02 00:56:56.941858 | orchestrator | 2026-01-02 00:56:56 | INFO  | Task bac3598c-0a39-4691-a06f-95ba6bc970e6 is in state SUCCESS
2026-01-02 00:56:56.943922 | orchestrator | 2026-01-02
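The loop above is a CLI polling three asynchronous tasks every few seconds until each leaves the STARTED state (here, task bac3598c… finally reaches SUCCESS). A minimal sketch of such a poll-and-wait loop — all names are hypothetical, not the actual osism implementation:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states


def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task's state until all reach a terminal state.

    get_state: callable mapping a task id to its current state string.
    interval:  seconds to sleep between polling rounds.
    log:       sink for progress messages (mirrors the log lines above).
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            log(f"Task {task_id} is in state {state}")
        # Drop tasks that have finished; keep polling the rest.
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

As in the log, a task stops being reported once it reaches a terminal state while the remaining tasks continue to be polled.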
00:56:56.943973 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-02 00:56:56.943988 | orchestrator | 2.16.14
2026-01-02 00:56:56.944001 | orchestrator |
2026-01-02 00:56:56.944013 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-02 00:56:56.944025 | orchestrator |
2026-01-02 00:56:56.944036 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-02 00:56:56.944048 | orchestrator | Friday 02 January 2026 00:46:06 +0000 (0:00:00.603) 0:00:00.603 ********
2026-01-02 00:56:56.944060 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.944073 | orchestrator |
2026-01-02 00:56:56.944084 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-02 00:56:56.944095 | orchestrator | Friday 02 January 2026 00:46:07 +0000 (0:00:00.998) 0:00:01.602 ********
2026-01-02 00:56:56.944106 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.944117 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.944158 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.944309 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.944326 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.944377 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.944390 | orchestrator |
2026-01-02 00:56:56.944408 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-02 00:56:56.944478 | orchestrator | Friday 02 January 2026 00:46:08 +0000 (0:00:01.352) 0:00:02.954 ********
2026-01-02 00:56:56.944496 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.944511 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.944562 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.944575 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.944586 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.944599 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.944611 | orchestrator |
2026-01-02 00:56:56.944623 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-02 00:56:56.944636 | orchestrator | Friday 02 January 2026 00:46:09 +0000 (0:00:00.763) 0:00:03.717 ********
2026-01-02 00:56:56.944648 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.944661 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.944673 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.944685 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.944697 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.944710 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.944722 | orchestrator |
2026-01-02 00:56:56.944735 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-02 00:56:56.944748 | orchestrator | Friday 02 January 2026 00:46:10 +0000 (0:00:00.905) 0:00:04.623 ********
2026-01-02 00:56:56.944760 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.944804 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.944817 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.944842 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.944853 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.944864 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.944875 | orchestrator |
2026-01-02 00:56:56.944886 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-02 00:56:56.944897 | orchestrator | Friday 02 January 2026 00:46:11 +0000 (0:00:00.760) 0:00:05.383 ********
2026-01-02 00:56:56.944908 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.944918 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.944931 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.944949 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.944965 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.944979 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.944990 | orchestrator |
2026-01-02 00:56:56.945001 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-02 00:56:56.945012 | orchestrator | Friday 02 January 2026 00:46:11 +0000 (0:00:00.529) 0:00:05.913 ********
2026-01-02 00:56:56.945023 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.945034 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.945045 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.945055 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.945066 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.945227 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.945247 | orchestrator |
2026-01-02 00:56:56.945266 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-02 00:56:56.945282 | orchestrator | Friday 02 January 2026 00:46:12 +0000 (0:00:00.715) 0:00:06.629 ********
2026-01-02 00:56:56.945299 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.945311 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.945322 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.945333 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.945343 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.945354 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.945365 | orchestrator |
2026-01-02 00:56:56.945376 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-02 00:56:56.945397 | orchestrator | Friday 02 January 2026 00:46:13 +0000 (0:00:00.658) 0:00:07.288 ********
2026-01-02 00:56:56.945408 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.945419 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.945430 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.945440 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.945451 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.945462 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.945472 | orchestrator |
2026-01-02 00:56:56.945484 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-02 00:56:56.945494 | orchestrator | Friday 02 January 2026 00:46:14 +0000 (0:00:01.112) 0:00:08.400 ********
2026-01-02 00:56:56.945505 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:56:56.945516 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:56:56.945557 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:56:56.945569 | orchestrator |
2026-01-02 00:56:56.945581 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-02 00:56:56.945592 | orchestrator | Friday 02 January 2026 00:46:15 +0000 (0:00:00.727) 0:00:09.128 ********
2026-01-02 00:56:56.945602 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.945648 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.945661 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.945688 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.945700 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.945711 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.945721 | orchestrator |
2026-01-02 00:56:56.945732 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-02 00:56:56.945744 | orchestrator | Friday 02 January 2026 00:46:16 +0000 (0:00:01.434) 0:00:10.562 ********
2026-01-02 00:56:56.945755 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-01-02 00:56:56.945766 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:56:56.945776 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 00:56:56.945787 | orchestrator | 2026-01-02 00:56:56.945798 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-02 00:56:56.945809 | orchestrator | Friday 02 January 2026 00:46:19 +0000 (0:00:02.729) 0:00:13.292 ******** 2026-01-02 00:56:56.945821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-02 00:56:56.945832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-02 00:56:56.945843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-02 00:56:56.945854 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.945865 | orchestrator | 2026-01-02 00:56:56.945876 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-02 00:56:56.945887 | orchestrator | Friday 02 January 2026 00:46:20 +0000 (0:00:00.916) 0:00:14.208 ******** 2026-01-02 00:56:56.945900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.945914 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.945926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.945944 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.945956 | orchestrator | 2026-01-02 00:56:56.945967 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-02 00:56:56.945986 | orchestrator | Friday 02 January 2026 00:46:21 +0000 (0:00:01.429) 0:00:15.638 ******** 2026-01-02 00:56:56.945999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.946071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.946097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.946114 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.946133 | orchestrator | 2026-01-02 00:56:56.946272 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-01-02 00:56:56.946297 | orchestrator | Friday 02 January 2026 00:46:22 +0000 (0:00:00.450) 0:00:16.088 ******** 2026-01-02 00:56:56.946336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-02 00:46:17.121256', 'end': '2026-01-02 00:46:17.427858', 'delta': '0:00:00.306602', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.946362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-02 00:46:17.961561', 'end': '2026-01-02 00:46:18.221869', 'delta': '0:00:00.260308', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.946383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-02 00:46:18.773388', 'end': '2026-01-02 00:46:19.049659', 'delta': 
'0:00:00.276271', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.946406 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.946417 | orchestrator | 2026-01-02 00:56:56.946428 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-02 00:56:56.946439 | orchestrator | Friday 02 January 2026 00:46:22 +0000 (0:00:00.362) 0:00:16.451 ******** 2026-01-02 00:56:56.946450 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.946469 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.946480 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.946491 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.946502 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.946513 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.946523 | orchestrator | 2026-01-02 00:56:56.946695 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-02 00:56:56.946716 | orchestrator | Friday 02 January 2026 00:46:24 +0000 (0:00:01.937) 0:00:18.388 ******** 2026-01-02 00:56:56.946727 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-02 00:56:56.946739 | orchestrator | 2026-01-02 00:56:56.946750 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-02 00:56:56.946761 | orchestrator | Friday 02 January 2026 00:46:25 +0000 (0:00:00.887) 0:00:19.276 ******** 2026-01-02 00:56:56.946772 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.946783 | 
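The "Find a running mon container" and "Set_fact running_mon - container" tasks above decide whether a monitor is already up by running `docker ps -q --filter name=ceph-mon-<hostname>` on each mon host: a printed container ID means one is running, empty stdout (as in the skipped items shown) means none is. A minimal sketch of that check — the helper name is hypothetical, and the runner is injectable so it can be exercised without Docker:

```python
import shlex
import subprocess


def running_mon_container(hostname, runner=subprocess.run):
    """Return the ID of a running ceph-mon container for *hostname*, or None.

    Mirrors the `docker ps -q --filter name=ceph-mon-<hostname>` probe from
    the log above; empty stdout means no matching container is running.
    """
    cmd = shlex.split(f"docker ps -q --filter name=ceph-mon-{hostname}")
    result = runner(cmd, capture_output=True, text=True)
    container_id = result.stdout.strip()
    return container_id or None
```

Injecting `runner` keeps the probe logic testable; in production the default `subprocess.run` executes the actual `docker ps` command.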
orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.946794 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.946805 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.946816 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.946827 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.946837 | orchestrator |
2026-01-02 00:56:56.946848 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-02 00:56:56.946859 | orchestrator | Friday 02 January 2026 00:46:26 +0000 (0:00:01.629) 0:00:20.906 ********
2026-01-02 00:56:56.946870 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.946880 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.946891 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.946902 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.946912 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.946923 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.946934 | orchestrator |
2026-01-02 00:56:56.946945 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-02 00:56:56.946956 | orchestrator | Friday 02 January 2026 00:46:28 +0000 (0:00:01.441) 0:00:22.347 ********
2026-01-02 00:56:56.946967 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.946977 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.946988 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.946999 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.947009 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.947020 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.947031 | orchestrator |
2026-01-02 00:56:56.947042 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-02 00:56:56.947053 | orchestrator | Friday 02 January 2026 00:46:29 +0000 (0:00:01.022) 0:00:23.370 ********
2026-01-02 00:56:56.947064 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947075 | orchestrator |
2026-01-02 00:56:56.947085 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-02 00:56:56.947096 | orchestrator | Friday 02 January 2026 00:46:29 +0000 (0:00:00.285) 0:00:23.655 ********
2026-01-02 00:56:56.947107 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947117 | orchestrator |
2026-01-02 00:56:56.947127 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-02 00:56:56.947137 | orchestrator | Friday 02 January 2026 00:46:29 +0000 (0:00:00.222) 0:00:23.877 ********
2026-01-02 00:56:56.947147 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947157 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.947177 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.947211 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.947222 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.947231 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.947241 | orchestrator |
2026-01-02 00:56:56.947251 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-02 00:56:56.947261 | orchestrator | Friday 02 January 2026 00:46:30 +0000 (0:00:00.837) 0:00:24.715 ********
2026-01-02 00:56:56.947270 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947295 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.947305 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.947325 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.947335 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.947344 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.947354 | orchestrator |
2026-01-02 00:56:56.947663 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-02 00:56:56.947676 | orchestrator | Friday 02 January 2026 00:46:31 +0000 (0:00:00.720) 0:00:25.435 ********
2026-01-02 00:56:56.947686 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947695 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.947705 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.947715 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.947725 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.947734 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.947761 | orchestrator |
2026-01-02 00:56:56.947772 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-02 00:56:56.947782 | orchestrator | Friday 02 January 2026 00:46:32 +0000 (0:00:00.765) 0:00:26.200 ********
2026-01-02 00:56:56.947791 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947801 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.947810 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.947820 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.947829 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.947839 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.947848 | orchestrator |
2026-01-02 00:56:56.947858 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-02 00:56:56.947868 | orchestrator | Friday 02 January 2026 00:46:32 +0000 (0:00:00.756) 0:00:26.957 ********
2026-01-02 00:56:56.947877 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.947887 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.947928 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.947940 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.947950 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.947961 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.947984 | orchestrator |
2026-01-02 00:56:56.948031 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-02 00:56:56.948042 | orchestrator | Friday 02 January 2026 00:46:33 +0000 (0:00:00.649) 0:00:27.606 ********
2026-01-02 00:56:56.948061 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.948086 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.948246 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.948332 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.948392 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.948411 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.948426 | orchestrator |
2026-01-02 00:56:56.948438 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-02 00:56:56.948451 | orchestrator | Friday 02 January 2026 00:46:34 +0000 (0:00:00.841) 0:00:28.447 ********
2026-01-02 00:56:56.948469 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.948488 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.948505 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.948523 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.948559 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.948614 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.948641 | orchestrator |
2026-01-02 00:56:56.948652 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-02 00:56:56.948663 | orchestrator | Friday 02 January 2026 00:46:34 +0000 (0:00:00.500) 0:00:28.948 ********
2026-01-02 00:56:56.948677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804', 'dm-uuid-LVM-oDCsQFqAfSa6dNekR7EBkGs45lHB6rEjxEg56fF9bwXURWTj0WU6ut4LrqAmuF07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce', 'dm-uuid-LVM-rgPLZ8vXO2nPfvKWTpklEG3SJiaYe7YGxF8ENWvpzmak4GVoCkqMJeuWh4TUAQDw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026', 'dm-uuid-LVM-ToydGqMz0NdFJYSFD2nnthvxr0L1N1tYjsqzGrrOyxGznwoThZWKqX3aNqfblJT5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0', 'dm-uuid-LVM-N7La7drEyxNXHeLll3TwIuNRGF3K0bJub4Af73ag0HaEEuIXfs3i8QX2i65zWrmU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.948925 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j6yrGS-2HWP-4VVF-va30-HDvZ-1RQB-VvRL68', 'scsi-0QEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4', 'scsi-SQEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.948958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.948970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yzDxoC-avzk-Rjpo-kCJw-Mmt6-wfd1-UiS9Nm', 'scsi-0QEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91', 'scsi-SQEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.948994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507', 'scsi-SQEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fSKb5S-Nu4n-cIZx-pD2I-gqqM-M0Nc-VOHToN', 'scsi-0QEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf', 'scsi-SQEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AVPnHs-1Dx7-4kF9-CcXu-Zlii-dAN1-E780FS', 'scsi-0QEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005', 'scsi-SQEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f', 'scsi-SQEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7', 'dm-uuid-LVM-4qmPLn1HPIxZ6ZQiaCj89Um5tNbz0sJOm6PhWUzXHFPQIjgyz4hhCkw7KRfcZnKN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9', 'dm-uuid-LVM-v4tjCEpOdr47Lgc9wIGUVShY664D9DcD3Ev00n7LAoSYGXj51xMrUOA7qw9s8nOi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949428 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.949439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949481 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949602 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bHjTGW-M7Md-dGH2-1DRX-CahX-HQsY-ZSTrcJ', 'scsi-0QEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee', 'scsi-SQEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQAhjU-OSZ2-cMKq-byOK-KbNp-3kb1-CEBdqs', 'scsi-0QEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8', 'scsi-SQEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e', 'scsi-SQEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-02 00:56:56.949752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part16', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.949820 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.949832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.949976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part1', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part14', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part15', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part16', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.950004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.950054 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.950066 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.950077 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.950089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:56:56.950228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:56:56.950250 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:56:56.950269 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.950280 | orchestrator |
2026-01-02 00:56:56.950292 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-02 00:56:56.950303 | orchestrator | Friday 02 January 2026 00:46:36 +0000 (0:00:01.522) 0:00:30.471 ********
2026-01-02 00:56:56.950316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804', 'dm-uuid-LVM-oDCsQFqAfSa6dNekR7EBkGs45lHB6rEjxEg56fF9bwXURWTj0WU6ut4LrqAmuF07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.950334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool',
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce', 'dm-uuid-LVM-rgPLZ8vXO2nPfvKWTpklEG3SJiaYe7YGxF8ENWvpzmak4GVoCkqMJeuWh4TUAQDw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950395 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950408 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026', 'dm-uuid-LVM-ToydGqMz0NdFJYSFD2nnthvxr0L1N1tYjsqzGrrOyxGznwoThZWKqX3aNqfblJT5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-01-02 00:56:56.950419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0', 'dm-uuid-LVM-N7La7drEyxNXHeLll3TwIuNRGF3K0bJub4Af73ag0HaEEuIXfs3i8QX2i65zWrmU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950496 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950523 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 
00:56:56.950612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950651 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950671 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.950685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j6yrGS-2HWP-4VVF-va30-HDvZ-1RQB-VvRL68', 'scsi-0QEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4', 'scsi-SQEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.951813 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.951868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yzDxoC-avzk-Rjpo-kCJw-Mmt6-wfd1-UiS9Nm', 'scsi-0QEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91', 'scsi-SQEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.951889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:56:56.951986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507', 'scsi-SQEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952003 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fSKb5S-Nu4n-cIZx-pD2I-gqqM-M0Nc-VOHToN', 'scsi-0QEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf', 'scsi-SQEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952033 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AVPnHs-1Dx7-4kF9-CcXu-Zlii-dAN1-E780FS', 'scsi-0QEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005', 'scsi-SQEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952045 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': 
['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952056 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f', 'scsi-SQEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952140 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-01-02 00:56:56.952155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7', 'dm-uuid-LVM-4qmPLn1HPIxZ6ZQiaCj89Um5tNbz0sJOm6PhWUzXHFPQIjgyz4hhCkw7KRfcZnKN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9', 'dm-uuid-LVM-v4tjCEpOdr47Lgc9wIGUVShY664D9DcD3Ev00n7LAoSYGXj51xMrUOA7qw9s8nOi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952233 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.952307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952322 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:56:56.952517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bHjTGW-M7Md-dGH2-1DRX-CahX-HQsY-ZSTrcJ', 'scsi-0QEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee', 'scsi-SQEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQAhjU-OSZ2-cMKq-byOK-KbNp-3kb1-CEBdqs', 'scsi-0QEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8', 'scsi-SQEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e', 'scsi-SQEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952670 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952686 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952711 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952727 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952738 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952756 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952766 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.952776 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952853 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952868 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part1', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part14', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part15', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part16', 'scsi-SQEMU_QEMU_HARDDISK_0e3598a5-4260-4049-9c1b-f2c6dbf7b4cc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-02 00:56:56.952982 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.952997 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.953008 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.953023 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.953033 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:56:56.953050 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953060 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953114 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953127 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953144 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part1', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part14', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part15', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part16', 'scsi-SQEMU_QEMU_HARDDISK_d753ec4e-79c3-49c7-ab6d-1296ff31fb58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953176 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953298 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.953325 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.953335 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.953345 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953356 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953372 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953390 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953400 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953483 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953606 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part1', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part14', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part15', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part16', 'scsi-SQEMU_QEMU_HARDDISK_bc584711-153a-497e-a318-857c0ea51dad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953633 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-02 00:56:56.953643 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.953653 | orchestrator |
2026-01-02 00:56:56.953772 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-02 00:56:56.953789 | orchestrator | Friday 02 January 2026 00:46:37 +0000 (0:00:01.142) 0:00:31.613 ********
2026-01-02 00:56:56.953799 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.953810 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.953820 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.953830 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.953839 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.953852 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.953868 | orchestrator |
2026-01-02 00:56:56.953877 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-02 00:56:56.953885 | orchestrator | Friday 02 January 2026 00:46:38 +0000 (0:00:01.024) 0:00:32.638 ********
2026-01-02 00:56:56.953893 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.953901 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.953908 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.953916 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.953924 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.953931 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.953939 | orchestrator |
2026-01-02 00:56:56.953947 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-02 00:56:56.953955 | orchestrator | Friday 02 January 2026 00:46:39 +0000 (0:00:00.830) 0:00:33.469 ********
2026-01-02 00:56:56.953962 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.953970 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.953978 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.953993 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.954001 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.954009 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.954056 | orchestrator |
2026-01-02 00:56:56.954080 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-02 00:56:56.954089 | orchestrator | Friday 02 January 2026 00:46:40 +0000 (0:00:00.782) 0:00:34.251 ********
2026-01-02 00:56:56.954097 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.954104 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.954112 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.954120 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.954127 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.954135 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.954143 | orchestrator |
2026-01-02 00:56:56.954151 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-02 00:56:56.954159 | orchestrator | Friday 02 January 2026 00:46:41 +0000 (0:00:01.167) 0:00:35.418 ********
2026-01-02 00:56:56.954167 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.954174 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.954182 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.954190 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.954198 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.954205 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.954213 | orchestrator |
2026-01-02 00:56:56.954221 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-02 00:56:56.954229 | orchestrator | Friday 02 January 2026 00:46:42 +0000 (0:00:00.863) 0:00:36.281 ********
2026-01-02 00:56:56.954237 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.954245 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.954253 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.954260 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.954268 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.954276 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.954284 | orchestrator |
2026-01-02 00:56:56.954292 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-02 00:56:56.954300 | orchestrator | Friday 02 January 2026 00:46:43 +0000 (0:00:00.816) 0:00:37.098 ********
2026-01-02 00:56:56.954308 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:56:56.954316 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-02 00:56:56.954324 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-02 00:56:56.954332 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-02 00:56:56.954340 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-02 00:56:56.954347 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:56:56.954435 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-02 00:56:56.954456 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:56:56.954466 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-02 00:56:56.954475 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:56:56.954484 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-02 00:56:56.954493 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-02 00:56:56.954502 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-02 00:56:56.954511 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:56:56.954521 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-02 00:56:56.954551 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-02 00:56:56.954565 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-02 00:56:56.954579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:56:56.954594 | orchestrator |
2026-01-02 00:56:56.954607 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-02 00:56:56.954629 | orchestrator | Friday 02 January 2026 00:46:46 +0000 (0:00:03.431) 0:00:40.530 ********
2026-01-02 00:56:56.954637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:56:56.954645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:56:56.954653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:56:56.954661 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.954669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-02 00:56:56.954677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-02 00:56:56.954685 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-02 00:56:56.954693 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.954701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-02 00:56:56.954750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:56:56.954760 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-02 00:56:56.954768 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:56:56.954776 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-02 00:56:56.954784 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-02 00:56:56.954792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:56:56.954799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-02 00:56:56.954807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-02 00:56:56.954815 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.954823 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.954831 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.954838 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-02 00:56:56.954846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-02 00:56:56.954854 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-02 00:56:56.954862 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.954870 | orchestrator |
2026-01-02 00:56:56.954877 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-02 00:56:56.954885 | orchestrator | Friday 02 January 2026 00:46:47 +0000 (0:00:00.697) 0:00:41.227 ********
2026-01-02 00:56:56.954893 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.954901 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.954908 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.954917 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.954926 | orchestrator |
2026-01-02 00:56:56.954934 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-02 00:56:56.954942 | orchestrator | Friday 02 January 2026 00:46:48 +0000 (0:00:01.412) 0:00:42.640 ********
2026-01-02 00:56:56.954950 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.954958 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.954966 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.954974 | orchestrator |
2026-01-02 00:56:56.954982 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-02 00:56:56.954990 | orchestrator | Friday 02 January 2026 00:46:48 +0000 (0:00:00.365) 0:00:43.006 ********
2026-01-02 00:56:56.954997 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955005 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.955019 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.955027 | orchestrator |
2026-01-02 00:56:56.955035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-02 00:56:56.955043 | orchestrator | Friday 02 January 2026 00:46:49 +0000 (0:00:00.627) 0:00:43.634 ********
2026-01-02 00:56:56.955051 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955064 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.955072 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.955080 | orchestrator |
2026-01-02 00:56:56.955088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-02 00:56:56.955096 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:00.570) 0:00:44.204 ********
2026-01-02 00:56:56.955104 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.955112 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.955120 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.955127 | orchestrator |
2026-01-02 00:56:56.955135 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-02 00:56:56.955143 | orchestrator | Friday 02 January 2026 00:46:50 +0000 (0:00:00.708) 0:00:44.912 ********
2026-01-02 00:56:56.955151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.955159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.955167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.955174 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955182 | orchestrator |
2026-01-02 00:56:56.955190 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-02 00:56:56.955198 | orchestrator | Friday 02 January 2026 00:46:51 +0000 (0:00:00.501) 0:00:45.413 ********
2026-01-02 00:56:56.955206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.955214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.955221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.955229 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955237 | orchestrator |
2026-01-02 00:56:56.955245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-02 00:56:56.955252 | orchestrator | Friday 02 January 2026 00:46:51 +0000 (0:00:00.369) 0:00:45.783 ********
2026-01-02 00:56:56.955260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.955268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.955276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.955284 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955292 | orchestrator |
2026-01-02 00:56:56.955300 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-02 00:56:56.955307 | orchestrator | Friday 02 January 2026 00:46:52 +0000 (0:00:00.378) 0:00:46.162 ********
2026-01-02 00:56:56.955315 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.955323 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.955331 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.955338 | orchestrator |
2026-01-02 00:56:56.955346 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-02 00:56:56.955354 | orchestrator | Friday 02 January 2026 00:46:52 +0000 (0:00:00.645) 0:00:46.807 ********
2026-01-02 00:56:56.955362 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-02 00:56:56.955370 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-02 00:56:56.955402 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-02 00:56:56.955412 | orchestrator |
2026-01-02 00:56:56.955420 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-02 00:56:56.955428 | orchestrator | Friday 02 January 2026 00:46:53 +0000 (0:00:01.043) 0:00:47.851 ********
2026-01-02 00:56:56.955436 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:56:56.955444 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:56:56.955452 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:56:56.955460 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.955468 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-02 00:56:56.955475 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-02 00:56:56.955488 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-02 00:56:56.955496 | orchestrator |
2026-01-02 00:56:56.955504 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-02 00:56:56.955512 | orchestrator | Friday 02 January 2026 00:46:54 +0000 (0:00:00.780) 0:00:48.632 ********
2026-01-02 00:56:56.955520 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:56:56.955545 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:56:56.955558 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:56:56.955566 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.955574 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-02 00:56:56.955582 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-02 00:56:56.955590 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-02 00:56:56.955598 | orchestrator |
2026-01-02 00:56:56.955606 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:56:56.955614 | orchestrator | Friday 02 January 2026 00:46:56 +0000 (0:00:02.374) 0:00:51.006 ********
2026-01-02 00:56:56.955626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.955635 | orchestrator |
2026-01-02 00:56:56.955643 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:56:56.955651 | orchestrator | Friday 02 January 2026 00:46:57 +0000 (0:00:00.970) 0:00:51.977 ********
2026-01-02 00:56:56.955659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.955667 | orchestrator |
2026-01-02 00:56:56.955675 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:56:56.955683 | orchestrator | Friday 02 January 2026 00:46:59 +0000 (0:00:01.167) 0:00:53.144 ********
2026-01-02 00:56:56.955690 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955698 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.955706 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.955714 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.955722 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.955733 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.955746 | orchestrator |
2026-01-02 00:56:56.955754 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:56:56.955762 | orchestrator | Friday 02 January 2026 00:47:00 +0000 (0:00:01.192) 0:00:54.337 ********
2026-01-02 00:56:56.955770 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.955778 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.955786 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.955794 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.955801 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.955809 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.955817 | orchestrator |
2026-01-02 00:56:56.955825 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:56:56.955833 | orchestrator | Friday 02 January 2026 00:47:01 +0000 (0:00:00.909) 0:00:55.246 ********
2026-01-02 00:56:56.955841 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.955849 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.955856 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.955864 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.955872 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.955880 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.955893 | orchestrator |
2026-01-02 00:56:56.955901 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:56:56.955909 | orchestrator | Friday 02 January 2026 00:47:01 +0000 (0:00:00.771) 0:00:56.018 ********
2026-01-02 00:56:56.955917 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.955924 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.955932 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.955940 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.955948 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.955956 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.955963 | orchestrator |
2026-01-02 00:56:56.955971 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:56:56.955979 | orchestrator | Friday 02 January 2026 00:47:02 +0000 (0:00:00.782) 0:00:56.800 ********
2026-01-02 00:56:56.955987 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.955994 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.956002 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.956010 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.956018 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.956051 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.956060 | orchestrator |
2026-01-02 00:56:56.956068 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:56:56.956076 | orchestrator | Friday 02 January 2026 00:47:03 +0000 (0:00:01.072) 0:00:57.873 ********
2026-01-02 00:56:56.956084 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.956092 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.956100 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.956108 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956116 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956123 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956131 | orchestrator |
2026-01-02 00:56:56.956139 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-02 00:56:56.956147 | orchestrator | Friday 02 January 2026 00:47:04 +0000 (0:00:00.504) 0:00:58.378 ********
2026-01-02 00:56:56.956155 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.956163 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.956170 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.956178 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956186 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956194 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956201 | orchestrator |
2026-01-02 00:56:56.956209 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-02 00:56:56.956217 | orchestrator | Friday 02 January 2026 00:47:04 +0000 (0:00:00.598) 0:00:58.976 ********
2026-01-02 00:56:56.956225 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.956233 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.956240 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.956248 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.956256 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.956264 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.956271 | orchestrator |
2026-01-02 00:56:56.956279 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-02 00:56:56.956287 | orchestrator | Friday 02 January 2026 00:47:05 +0000 (0:00:00.975) 0:00:59.952 ********
2026-01-02 00:56:56.956295 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.956302 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.956310 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.956318 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.956325 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.956333 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.956341 | orchestrator |
2026-01-02 00:56:56.956349 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-02 00:56:56.956356 | orchestrator | Friday 02 January 2026 00:47:06 +0000 (0:00:01.097) 0:01:01.049 ********
2026-01-02 00:56:56.956364 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.956378 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.956386 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.956394 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956406 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956414 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956422 | orchestrator |
2026-01-02 00:56:56.956429 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-02 00:56:56.956437 | orchestrator | Friday 02 January 2026 00:47:07 +0000 (0:00:00.556) 0:01:01.606 ********
2026-01-02 00:56:56.956445 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.956453 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.956460 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.956468 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.956476 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.956484 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.956492 | orchestrator |
2026-01-02 00:56:56.956500 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-02 00:56:56.956508 | orchestrator | Friday 02 January 2026 00:47:08 +0000 (0:00:00.640) 0:01:02.246 ********
2026-01-02 00:56:56.956515 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.956523 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.956582 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.956590 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956598 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956606 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956614 | orchestrator |
2026-01-02 00:56:56.956622 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-02 00:56:56.956630 | orchestrator | Friday 02 January 2026 00:47:08 +0000 (0:00:00.494) 0:01:02.741 ********
2026-01-02 00:56:56.956638 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.956645 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.956653 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.956661 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956669 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956676 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956682 | orchestrator |
2026-01-02 00:56:56.956689 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-02 00:56:56.956696 | orchestrator | Friday 02 January 2026 00:47:09 +0000 (0:00:00.600) 0:01:03.341 ********
2026-01-02 00:56:56.956703 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.956709 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.956716 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.956722 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956729 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956736 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956743 | orchestrator |
2026-01-02 00:56:56.956749 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-02 00:56:56.956756 | orchestrator | Friday 02 January 2026 00:47:09 +0000 (0:00:00.498) 0:01:03.840 ********
2026-01-02 00:56:56.956763 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.956769 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.956776 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.956782 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.956789 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.956796 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.956802 | orchestrator |
2026-01-02 00:56:56.956809 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-02 00:56:56.956816
| orchestrator | Friday 02 January 2026 00:47:10 +0000 (0:00:00.641) 0:01:04.481 ******** 2026-01-02 00:56:56.956823 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.956829 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.956836 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.956842 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.956874 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.956888 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.956895 | orchestrator | 2026-01-02 00:56:56.956902 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:56:56.956908 | orchestrator | Friday 02 January 2026 00:47:10 +0000 (0:00:00.504) 0:01:04.986 ******** 2026-01-02 00:56:56.956915 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.956922 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.956928 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.956935 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.956941 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.956948 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.956954 | orchestrator | 2026-01-02 00:56:56.956961 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:56:56.956968 | orchestrator | Friday 02 January 2026 00:47:11 +0000 (0:00:00.681) 0:01:05.668 ******** 2026-01-02 00:56:56.956974 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.956981 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.956987 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.956994 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.957000 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.957007 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.957013 | orchestrator | 2026-01-02 00:56:56.957020 | orchestrator | TASK [ceph-handler : 
Set_fact handler_exporter_status] ************************* 2026-01-02 00:56:56.957027 | orchestrator | Friday 02 January 2026 00:47:12 +0000 (0:00:00.983) 0:01:06.652 ******** 2026-01-02 00:56:56.957033 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.957040 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.957046 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.957053 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.957059 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.957066 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.957073 | orchestrator | 2026-01-02 00:56:56.957079 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-02 00:56:56.957086 | orchestrator | Friday 02 January 2026 00:47:14 +0000 (0:00:02.018) 0:01:08.670 ******** 2026-01-02 00:56:56.957092 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.957099 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.957106 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.957112 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.957119 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.957125 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.957132 | orchestrator | 2026-01-02 00:56:56.957138 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-02 00:56:56.957145 | orchestrator | Friday 02 January 2026 00:47:16 +0000 (0:00:01.470) 0:01:10.140 ******** 2026-01-02 00:56:56.957151 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.957158 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.957164 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.957175 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.957182 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.957189 | orchestrator | changed: [testbed-node-2] 2026-01-02 
00:56:56.957195 | orchestrator | 2026-01-02 00:56:56.957202 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-02 00:56:56.957208 | orchestrator | Friday 02 January 2026 00:47:18 +0000 (0:00:02.682) 0:01:12.823 ******** 2026-01-02 00:56:56.957215 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.957222 | orchestrator | 2026-01-02 00:56:56.957228 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-02 00:56:56.957235 | orchestrator | Friday 02 January 2026 00:47:19 +0000 (0:00:01.004) 0:01:13.827 ******** 2026-01-02 00:56:56.957242 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.957248 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.957259 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.957266 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.957272 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.957279 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.957286 | orchestrator | 2026-01-02 00:56:56.957292 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-02 00:56:56.957299 | orchestrator | Friday 02 January 2026 00:47:20 +0000 (0:00:00.513) 0:01:14.340 ******** 2026-01-02 00:56:56.957305 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.957312 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.957318 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.957325 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.957331 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.957338 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.957344 | orchestrator | 2026-01-02 00:56:56.957351 | orchestrator 
| TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-02 00:56:56.957358 | orchestrator | Friday 02 January 2026 00:47:20 +0000 (0:00:00.663) 0:01:15.004 ******** 2026-01-02 00:56:56.957364 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:56:56.957371 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:56:56.957378 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:56:56.957384 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:56:56.957391 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:56:56.957398 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-02 00:56:56.957404 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:56:56.957411 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:56:56.957417 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:56:56.957424 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:56:56.957451 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:56:56.957459 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-02 00:56:56.957466 | orchestrator | 2026-01-02 00:56:56.957473 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-02 00:56:56.957480 | orchestrator | Friday 02 January 2026 00:47:22 +0000 (0:00:01.357) 0:01:16.361 ******** 2026-01-02 00:56:56.957486 | orchestrator | changed: 
[testbed-node-4] 2026-01-02 00:56:56.957493 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.957500 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.957507 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.957513 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.957520 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.957543 | orchestrator | 2026-01-02 00:56:56.957551 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-02 00:56:56.957557 | orchestrator | Friday 02 January 2026 00:47:23 +0000 (0:00:01.327) 0:01:17.689 ******** 2026-01-02 00:56:56.957564 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.957570 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.957577 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.957583 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.957590 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.957597 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.957603 | orchestrator | 2026-01-02 00:56:56.957610 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-02 00:56:56.957616 | orchestrator | Friday 02 January 2026 00:47:24 +0000 (0:00:00.555) 0:01:18.244 ******** 2026-01-02 00:56:56.957628 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.957635 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.957641 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.957648 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.957654 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.957661 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.957667 | orchestrator | 2026-01-02 00:56:56.957674 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-02 00:56:56.957681 | 
orchestrator | Friday 02 January 2026 00:47:24 +0000 (0:00:00.784) 0:01:19.029 ******** 2026-01-02 00:56:56.957687 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.957694 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.957701 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.957707 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.957714 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.957720 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.957727 | orchestrator | 2026-01-02 00:56:56.957733 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-02 00:56:56.957744 | orchestrator | Friday 02 January 2026 00:47:25 +0000 (0:00:00.525) 0:01:19.554 ******** 2026-01-02 00:56:56.957751 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.957758 | orchestrator | 2026-01-02 00:56:56.957764 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-02 00:56:56.957771 | orchestrator | Friday 02 January 2026 00:47:26 +0000 (0:00:01.027) 0:01:20.582 ******** 2026-01-02 00:56:56.957778 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.957785 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.957791 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.957798 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.957804 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.957811 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.957818 | orchestrator | 2026-01-02 00:56:56.957824 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-02 00:56:56.957831 | orchestrator | Friday 02 January 2026 00:48:17 +0000 (0:00:51.341) 0:02:11.923 ******** 2026-01-02 
00:56:56.957838 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:56:56.957844 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:56:56.957851 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:56:56.957857 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.957864 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:56:56.957871 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:56:56.957878 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:56:56.957884 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.957891 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:56:56.957898 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:56:56.957904 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:56:56.957911 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.957918 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:56:56.957924 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:56:56.957931 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:56:56.957937 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.957944 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:56:56.957959 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:56:56.957966 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:56:56.957973 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958005 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-02 00:56:56.958013 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-02 00:56:56.958045 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-02 00:56:56.958052 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958059 | orchestrator | 2026-01-02 00:56:56.958066 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-02 00:56:56.958073 | orchestrator | Friday 02 January 2026 00:48:18 +0000 (0:00:00.839) 0:02:12.763 ******** 2026-01-02 00:56:56.958079 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958086 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958093 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958099 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958106 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958113 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958119 | orchestrator | 2026-01-02 00:56:56.958126 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-02 00:56:56.958132 | orchestrator | Friday 02 January 2026 00:48:19 +0000 (0:00:00.870) 0:02:13.634 ******** 2026-01-02 00:56:56.958139 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958145 | orchestrator | 2026-01-02 00:56:56.958152 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-02 00:56:56.958159 | orchestrator | Friday 02 January 2026 00:48:19 +0000 (0:00:00.157) 0:02:13.792 ******** 2026-01-02 00:56:56.958166 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958172 | orchestrator | 
skipping: [testbed-node-4] 2026-01-02 00:56:56.958179 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958186 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958192 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958199 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958205 | orchestrator | 2026-01-02 00:56:56.958212 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-02 00:56:56.958219 | orchestrator | Friday 02 January 2026 00:48:20 +0000 (0:00:00.682) 0:02:14.475 ******** 2026-01-02 00:56:56.958225 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958232 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958239 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958245 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958252 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958258 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958265 | orchestrator | 2026-01-02 00:56:56.958271 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-02 00:56:56.958278 | orchestrator | Friday 02 January 2026 00:48:21 +0000 (0:00:00.963) 0:02:15.439 ******** 2026-01-02 00:56:56.958285 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958295 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958302 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958308 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958315 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958321 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958328 | orchestrator | 2026-01-02 00:56:56.958335 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-02 00:56:56.958341 | orchestrator | Friday 02 January 2026 00:48:21 +0000 (0:00:00.566) 
0:02:16.005 ******** 2026-01-02 00:56:56.958348 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.958354 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.958361 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.958373 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.958380 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.958386 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.958393 | orchestrator | 2026-01-02 00:56:56.958400 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-02 00:56:56.958406 | orchestrator | Friday 02 January 2026 00:48:24 +0000 (0:00:02.193) 0:02:18.199 ******** 2026-01-02 00:56:56.958413 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.958420 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.958426 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.958433 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.958439 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.958446 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.958452 | orchestrator | 2026-01-02 00:56:56.958459 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-02 00:56:56.958466 | orchestrator | Friday 02 January 2026 00:48:24 +0000 (0:00:00.790) 0:02:18.989 ******** 2026-01-02 00:56:56.958472 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.958480 | orchestrator | 2026-01-02 00:56:56.958487 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-02 00:56:56.958494 | orchestrator | Friday 02 January 2026 00:48:26 +0000 (0:00:01.105) 0:02:20.095 ******** 2026-01-02 00:56:56.958500 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958507 | orchestrator | skipping: 
[testbed-node-4] 2026-01-02 00:56:56.958514 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958520 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958545 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958553 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958559 | orchestrator | 2026-01-02 00:56:56.958566 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-02 00:56:56.958573 | orchestrator | Friday 02 January 2026 00:48:26 +0000 (0:00:00.673) 0:02:20.769 ******** 2026-01-02 00:56:56.958579 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958586 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958593 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958599 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958606 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958612 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958619 | orchestrator | 2026-01-02 00:56:56.958626 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-02 00:56:56.958632 | orchestrator | Friday 02 January 2026 00:48:27 +0000 (0:00:00.525) 0:02:21.295 ******** 2026-01-02 00:56:56.958639 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958645 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958678 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958686 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958693 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958699 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958706 | orchestrator | 2026-01-02 00:56:56.958712 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-02 00:56:56.958719 | orchestrator | Friday 02 January 2026 00:48:27 +0000 (0:00:00.719) 
0:02:22.015 ******** 2026-01-02 00:56:56.958726 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958732 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958739 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958745 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958752 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958758 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958765 | orchestrator | 2026-01-02 00:56:56.958772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-02 00:56:56.958778 | orchestrator | Friday 02 January 2026 00:48:28 +0000 (0:00:00.574) 0:02:22.590 ******** 2026-01-02 00:56:56.958791 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958797 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958804 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958810 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958817 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958824 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958830 | orchestrator | 2026-01-02 00:56:56.958837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-02 00:56:56.958843 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.701) 0:02:23.291 ******** 2026-01-02 00:56:56.958850 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958857 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958863 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958870 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958876 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958883 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958889 | orchestrator | 2026-01-02 00:56:56.958896 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_release pacific] ******************* 2026-01-02 00:56:56.958903 | orchestrator | Friday 02 January 2026 00:48:29 +0000 (0:00:00.556) 0:02:23.848 ******** 2026-01-02 00:56:56.958909 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958916 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958922 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958929 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958936 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.958942 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.958949 | orchestrator | 2026-01-02 00:56:56.958955 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-02 00:56:56.958966 | orchestrator | Friday 02 January 2026 00:48:30 +0000 (0:00:00.685) 0:02:24.533 ******** 2026-01-02 00:56:56.958973 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.958979 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.958986 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.958993 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.958999 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.959006 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.959012 | orchestrator | 2026-01-02 00:56:56.959019 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-02 00:56:56.959025 | orchestrator | Friday 02 January 2026 00:48:31 +0000 (0:00:00.575) 0:02:25.108 ******** 2026-01-02 00:56:56.959032 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.959039 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.959045 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.959052 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.959058 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.959065 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.959072 
| orchestrator |
2026-01-02 00:56:56.959078 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-02 00:56:56.959085 | orchestrator | Friday 02 January 2026 00:48:32 +0000 (0:00:01.033) 0:02:26.142 ********
2026-01-02 00:56:56.959092 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.959098 | orchestrator |
2026-01-02 00:56:56.959105 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-02 00:56:56.959112 | orchestrator | Friday 02 January 2026 00:48:33 +0000 (0:00:01.146) 0:02:27.288 ********
2026-01-02 00:56:56.959118 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-02 00:56:56.959125 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-02 00:56:56.959132 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-02 00:56:56.959138 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-02 00:56:56.959145 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-02 00:56:56.959156 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-02 00:56:56.959162 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-02 00:56:56.959169 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-02 00:56:56.959176 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-02 00:56:56.959182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-02 00:56:56.959189 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-02 00:56:56.959196 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-02 00:56:56.959202 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-02 00:56:56.959209 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-02 00:56:56.959216 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-02 00:56:56.959222 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-02 00:56:56.959229 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-02 00:56:56.959236 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-02 00:56:56.959264 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-02 00:56:56.959272 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-02 00:56:56.959278 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-02 00:56:56.959285 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-02 00:56:56.959291 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-02 00:56:56.959298 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-02 00:56:56.959304 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-02 00:56:56.959311 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-02 00:56:56.959317 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-02 00:56:56.959324 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-02 00:56:56.959330 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-02 00:56:56.959337 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-02 00:56:56.959343 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-02 00:56:56.959350 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-02 00:56:56.959356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-02 00:56:56.959363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-02 00:56:56.959369 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-02 00:56:56.959376 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-02 00:56:56.959383 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-02 00:56:56.959389 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-02 00:56:56.959396 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-02 00:56:56.959402 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-02 00:56:56.959409 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:56:56.959415 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-02 00:56:56.959422 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:56:56.959428 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-02 00:56:56.959435 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:56:56.959441 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:56:56.959451 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:56:56.959458 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:56:56.959469 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:56:56.959476 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:56:56.959482 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:56:56.959489 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-02 00:56:56.959495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:56:56.959502 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:56:56.959509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:56:56.959515 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:56:56.959522 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:56:56.959546 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:56:56.959557 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-02 00:56:56.959569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:56:56.959579 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:56:56.959590 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:56:56.959599 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:56:56.959606 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:56:56.959613 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:56:56.959619 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-02 00:56:56.959626 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:56:56.959632 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:56:56.959639 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:56:56.959645 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:56:56.959652 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:56:56.959658 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-02 00:56:56.959665 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:56:56.959671 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:56:56.959678 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:56:56.959684 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:56:56.959714 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:56:56.959722 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-02 00:56:56.959729 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:56:56.959735 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-02 00:56:56.959742 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:56:56.959748 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:56:56.959755 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-02 00:56:56.959761 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-02 00:56:56.959768 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:56:56.959775 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-02 00:56:56.959781 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-02 00:56:56.959788 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-02 00:56:56.959800 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-02 00:56:56.959807 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-02 00:56:56.959814 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-02 00:56:56.959820 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-02 00:56:56.959827 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-02 00:56:56.959833 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-02 00:56:56.959840 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-02 00:56:56.959846 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-02 00:56:56.959853 | orchestrator |
2026-01-02 00:56:56.959859 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-02 00:56:56.959866 | orchestrator | Friday 02 January 2026 00:48:40 +0000 (0:00:07.071) 0:02:34.359 ********
2026-01-02 00:56:56.959873 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.959879 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.959886 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.959892 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-01-02 00:56:56.959899 | orchestrator |
2026-01-02 00:56:56.959906 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-02 00:56:56.959916 | orchestrator | Friday 02 January 2026 00:48:41 +0000 (0:00:00.852) 0:02:35.212 ********
2026-01-02 00:56:56.959923 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.959930 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.959937 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.959944 | orchestrator |
2026-01-02 00:56:56.959950 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-02 00:56:56.959957 | orchestrator | Friday 02 January 2026 00:48:42 +0000 (0:00:00.930) 0:02:36.142 ********
2026-01-02 00:56:56.959964 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.959970 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.959977 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.959984 | orchestrator |
2026-01-02 00:56:56.959990 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-02 00:56:56.959997 | orchestrator | Friday 02 January 2026 00:48:43 +0000 (0:00:01.405) 0:02:37.547 ********
2026-01-02 00:56:56.960004 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.960010 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.960017 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.960023 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960030 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960037 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960043 | orchestrator |
2026-01-02 00:56:56.960050 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-02 00:56:56.960056 | orchestrator | Friday 02 January 2026 00:48:44 +0000 (0:00:00.777) 0:02:38.325 ********
2026-01-02 00:56:56.960063 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.960069 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.960076 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.960083 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960089 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960100 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960107 | orchestrator |
2026-01-02 00:56:56.960114 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-02 00:56:56.960120 | orchestrator | Friday 02 January 2026 00:48:45 +0000 (0:00:00.550) 0:02:39.059 ********
2026-01-02 00:56:56.960127 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960134 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960140 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960147 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960153 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960160 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960166 | orchestrator |
2026-01-02 00:56:56.960194 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-02 00:56:56.960202 | orchestrator | Friday 02 January 2026 00:48:45 +0000 (0:00:00.550) 0:02:39.609 ********
2026-01-02 00:56:56.960208 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960215 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960222 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960228 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960235 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960241 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960248 | orchestrator |
2026-01-02 00:56:56.960254 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-02 00:56:56.960261 | orchestrator | Friday 02 January 2026 00:48:46 +0000 (0:00:00.728) 0:02:40.337 ********
2026-01-02 00:56:56.960268 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960274 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960280 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960287 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960293 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960300 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960307 | orchestrator |
2026-01-02 00:56:56.960313 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-02 00:56:56.960320 | orchestrator | Friday 02 January 2026 00:48:46 +0000 (0:00:00.536) 0:02:40.874 ********
2026-01-02 00:56:56.960327 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960333 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960340 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960346 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960353 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960359 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960366 | orchestrator |
2026-01-02 00:56:56.960372 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-02 00:56:56.960379 | orchestrator | Friday 02 January 2026 00:48:47 +0000 (0:00:00.561) 0:02:41.436 ********
2026-01-02 00:56:56.960386 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960392 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960399 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960405 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960412 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960418 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960425 | orchestrator |
2026-01-02 00:56:56.960431 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-02 00:56:56.960438 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:00.669) 0:02:42.106 ********
2026-01-02 00:56:56.960445 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960451 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960462 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960468 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960475 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960481 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960488 | orchestrator |
2026-01-02 00:56:56.960501 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-02 00:56:56.960507 | orchestrator | Friday 02 January 2026 00:48:48 +0000 (0:00:00.755) 0:02:42.861 ********
2026-01-02 00:56:56.960514 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960521 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960547 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960555 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.960561 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.960568 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.960575 | orchestrator |
2026-01-02 00:56:56.960581 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-02 00:56:56.960588 | orchestrator | Friday 02 January 2026 00:48:51 +0000 (0:00:02.793) 0:02:45.655 ********
2026-01-02 00:56:56.960595 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.960601 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.960608 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.960614 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960621 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960627 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960634 | orchestrator |
2026-01-02 00:56:56.960641 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-02 00:56:56.960647 | orchestrator | Friday 02 January 2026 00:48:52 +0000 (0:00:00.643) 0:02:46.298 ********
2026-01-02 00:56:56.960654 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.960660 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.960667 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.960674 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960680 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960687 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960693 | orchestrator |
2026-01-02 00:56:56.960700 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-02 00:56:56.960707 | orchestrator | Friday 02 January 2026 00:48:52 +0000 (0:00:00.629) 0:02:46.928 ********
2026-01-02 00:56:56.960713 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960720 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960726 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960733 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960739 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960746 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960753 | orchestrator |
2026-01-02 00:56:56.960759 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-02 00:56:56.960766 | orchestrator | Friday 02 January 2026 00:48:53 +0000 (0:00:00.855) 0:02:47.784 ********
2026-01-02 00:56:56.960773 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.960779 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.960786 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.960793 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960821 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960829 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960836 | orchestrator |
2026-01-02 00:56:56.960842 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-02 00:56:56.960849 | orchestrator | Friday 02 January 2026 00:48:54 +0000 (0:00:00.526) 0:02:48.311 ********
2026-01-02 00:56:56.960857 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-02 00:56:56.960866 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-02 00:56:56.960879 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-02 00:56:56.960887 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-02 00:56:56.960893 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960904 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-02 00:56:56.960911 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-02 00:56:56.960918 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960925 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960931 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960938 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.960944 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.960951 | orchestrator |
2026-01-02 00:56:56.960957 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-02 00:56:56.960964 | orchestrator | Friday 02 January 2026 00:48:54 +0000 (0:00:00.646) 0:02:48.957 ********
2026-01-02 00:56:56.960971 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.960977 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.960984 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.960990 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.960997 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961003 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961010 | orchestrator |
2026-01-02 00:56:56.961016 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-02 00:56:56.961023 | orchestrator | Friday 02 January 2026 00:48:55 +0000 (0:00:00.536) 0:02:49.493 ********
2026-01-02 00:56:56.961030 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961036 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.961043 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.961049 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961056 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961062 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961069 | orchestrator |
2026-01-02 00:56:56.961075 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-02 00:56:56.961082 | orchestrator | Friday 02 January 2026 00:48:56 +0000 (0:00:00.682) 0:02:50.176 ********
2026-01-02 00:56:56.961089 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961095 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.961102 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.961108 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961115 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961121 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961135 | orchestrator |
2026-01-02 00:56:56.961142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-02 00:56:56.961148 | orchestrator | Friday 02 January 2026 00:48:56 +0000 (0:00:00.519) 0:02:50.696 ********
2026-01-02 00:56:56.961155 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961161 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.961168 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.961174 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961181 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961187 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961194 | orchestrator |
2026-01-02 00:56:56.961201 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-02 00:56:56.961228 | orchestrator | Friday 02 January 2026 00:48:57 +0000 (0:00:00.676) 0:02:51.372 ********
2026-01-02 00:56:56.961236 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961243 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.961249 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.961256 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961262 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961269 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961276 | orchestrator |
2026-01-02 00:56:56.961282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-02 00:56:56.961289 | orchestrator | Friday 02 January 2026 00:48:57 +0000 (0:00:00.664) 0:02:52.037 ********
2026-01-02 00:56:56.961295 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.961302 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.961309 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.961315 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961322 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961328 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961334 | orchestrator |
2026-01-02 00:56:56.961341 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-02 00:56:56.961348 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:01.256) 0:02:53.294 ********
2026-01-02 00:56:56.961354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.961361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.961367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.961374 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961381 | orchestrator |
2026-01-02 00:56:56.961387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-02 00:56:56.961394 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:00.331) 0:02:53.626 ********
2026-01-02 00:56:56.961400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.961407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.961413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.961420 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961426 | orchestrator |
2026-01-02 00:56:56.961433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-02 00:56:56.961439 | orchestrator | Friday 02 January 2026 00:48:59 +0000 (0:00:00.310) 0:02:53.936 ********
2026-01-02 00:56:56.961446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.961452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.961459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.961466 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961472 | orchestrator |
2026-01-02 00:56:56.961483 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-02 00:56:56.961490 | orchestrator | Friday 02 January 2026 00:49:00 +0000 (0:00:00.347) 0:02:54.284 ********
2026-01-02 00:56:56.961496 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.961503 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.961514 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.961520 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961568 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961577 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961583 | orchestrator |
2026-01-02 00:56:56.961590 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-02 00:56:56.961597 | orchestrator | Friday 02 January 2026 00:49:00 +0000 (0:00:00.557) 0:02:54.841 ********
2026-01-02 00:56:56.961603 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-02 00:56:56.961610 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-02 00:56:56.961616 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-02 00:56:56.961623 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-02 00:56:56.961630 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-02 00:56:56.961636 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961643 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.961650 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-02 00:56:56.961656 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.961663 | orchestrator |
2026-01-02 00:56:56.961669 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-02 00:56:56.961676 | orchestrator | Friday 02 January 2026 00:49:02 +0000 (0:00:02.114) 0:02:56.956 ********
2026-01-02 00:56:56.961683 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.961689 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.961696 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.961702 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.961709 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.961715 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.961722 | orchestrator |
2026-01-02 00:56:56.961729 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-02 00:56:56.961736 | orchestrator | Friday 02 January 2026 00:49:06 +0000 (0:00:03.169) 0:03:00.125 ********
2026-01-02 00:56:56.961742 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.961749 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.961755 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.961762 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.961768 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.961775 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.961782 | orchestrator |
2026-01-02 00:56:56.961788 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-02 00:56:56.961795 | orchestrator | Friday 02 January 2026 00:49:07 +0000 (0:00:01.111) 0:03:01.237 ********
2026-01-02 00:56:56.961802 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.961808 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.961815 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.961821 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.961828 | orchestrator |
2026-01-02 00:56:56.961835 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-02 00:56:56.961865 | orchestrator | Friday 02 January 2026 00:49:08 +0000 (0:00:00.910) 0:03:02.147 ********
2026-01-02 00:56:56.961873 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.961879 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.961886 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.961893 | orchestrator |
2026-01-02 00:56:56.961900 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-02 00:56:56.961906 | orchestrator | Friday 02 January 2026 00:49:08 +0000 (0:00:00.286) 0:03:02.434 ********
2026-01-02 00:56:56.961913 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.961920 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.961926 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.961933 | orchestrator |
2026-01-02 00:56:56.961940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-02 00:56:56.961953 | orchestrator | Friday 02 January 2026 00:49:09 +0000 (0:00:01.210) 0:03:03.644 ********
2026-01-02 00:56:56.961959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:56:56.961966 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:56:56.961972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:56:56.961979 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.961986 | orchestrator |
2026-01-02 00:56:56.961992 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-02 00:56:56.961999 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:00.985) 0:03:04.629 ********
2026-01-02 00:56:56.962006 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.962012 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.962041 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.962047 | orchestrator |
2026-01-02 00:56:56.962054 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-02 00:56:56.962060 | orchestrator | Friday 02 January 2026 00:49:10 +0000 (0:00:00.292) 0:03:04.922 ********
2026-01-02 00:56:56.962066 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.962072 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.962078 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.962084 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.962090 | orchestrator |
2026-01-02 00:56:56.962097 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-02 00:56:56.962103 | orchestrator | Friday 02 January 2026 00:49:11 +0000 (0:00:00.770) 0:03:05.693 ********
2026-01-02 00:56:56.962109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.962115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.962121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.962127 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962133 | orchestrator |
2026-01-02 00:56:56.962143 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-02 00:56:56.962150 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:00.576) 0:03:06.270 ********
2026-01-02 00:56:56.962156 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962162 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.962168 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.962174 | orchestrator |
2026-01-02 00:56:56.962180 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-02 00:56:56.962186 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:00.262) 0:03:06.532 ********
2026-01-02 00:56:56.962193 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962199 | orchestrator |
2026-01-02 00:56:56.962205 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-02 00:56:56.962211 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:00.162) 0:03:06.694 ********
2026-01-02 00:56:56.962217 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962224 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.962230 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.962236 | orchestrator |
2026-01-02 00:56:56.962242 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-02 00:56:56.962248 | orchestrator | Friday 02 January 2026 00:49:12 +0000 (0:00:00.236) 0:03:06.931 ********
2026-01-02 00:56:56.962254 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962260 | orchestrator |
2026-01-02 00:56:56.962267 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-02 00:56:56.962273 | orchestrator | Friday 02 January 2026 00:49:13 +0000 (0:00:00.162) 0:03:07.093 ********
2026-01-02 00:56:56.962279 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962285 | orchestrator |
2026-01-02 00:56:56.962291 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-02 00:56:56.962306 | orchestrator | Friday 02 January 2026 00:49:13 +0000 (0:00:00.152) 0:03:07.246 ********
2026-01-02 00:56:56.962312 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962318 | orchestrator |
2026-01-02 00:56:56.962324 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-02 00:56:56.962330 | orchestrator | Friday 02 January 2026 00:49:13 +0000 (0:00:00.096) 0:03:07.343 ********
2026-01-02 00:56:56.962336 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962342 | orchestrator |
2026-01-02 00:56:56.962349 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-02 00:56:56.962355 | orchestrator | Friday 02 January 2026 00:49:13 +0000 (0:00:00.168) 0:03:07.511 ********
2026-01-02 00:56:56.962361 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.962367 | orchestrator |
2026-01-02 00:56:56.962373 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-02 00:56:56.962379 | orchestrator |
Friday 02 January 2026 00:49:13 +0000 (0:00:00.539) 0:03:08.051 ******** 2026-01-02 00:56:56.962386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:56:56.962392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:56:56.962398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:56:56.962404 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962410 | orchestrator | 2026-01-02 00:56:56.962417 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-02 00:56:56.962443 | orchestrator | Friday 02 January 2026 00:49:14 +0000 (0:00:00.299) 0:03:08.350 ******** 2026-01-02 00:56:56.962451 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962457 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.962463 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.962469 | orchestrator | 2026-01-02 00:56:56.962475 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-02 00:56:56.962482 | orchestrator | Friday 02 January 2026 00:49:14 +0000 (0:00:00.267) 0:03:08.618 ******** 2026-01-02 00:56:56.962488 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962494 | orchestrator | 2026-01-02 00:56:56.962500 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-02 00:56:56.962506 | orchestrator | Friday 02 January 2026 00:49:14 +0000 (0:00:00.187) 0:03:08.805 ******** 2026-01-02 00:56:56.962512 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962518 | orchestrator | 2026-01-02 00:56:56.962524 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-02 00:56:56.962548 | orchestrator | Friday 02 January 2026 00:49:14 +0000 (0:00:00.191) 0:03:08.996 ******** 2026-01-02 00:56:56.962555 | orchestrator | skipping: 
[testbed-node-0] 2026-01-02 00:56:56.962561 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.962567 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.962573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:56:56.962580 | orchestrator | 2026-01-02 00:56:56.962586 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-02 00:56:56.962592 | orchestrator | Friday 02 January 2026 00:49:15 +0000 (0:00:00.930) 0:03:09.927 ******** 2026-01-02 00:56:56.962598 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.962604 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.962611 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.962617 | orchestrator | 2026-01-02 00:56:56.962623 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-02 00:56:56.962629 | orchestrator | Friday 02 January 2026 00:49:16 +0000 (0:00:00.288) 0:03:10.216 ******** 2026-01-02 00:56:56.962635 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.962641 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.962647 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.962653 | orchestrator | 2026-01-02 00:56:56.962660 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-02 00:56:56.962671 | orchestrator | Friday 02 January 2026 00:49:17 +0000 (0:00:01.278) 0:03:11.494 ******** 2026-01-02 00:56:56.962678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:56:56.962684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:56:56.962694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:56:56.962700 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962706 | orchestrator | 2026-01-02 
00:56:56.962712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-02 00:56:56.962719 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.664) 0:03:12.158 ******** 2026-01-02 00:56:56.962725 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.962731 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.962737 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.962743 | orchestrator | 2026-01-02 00:56:56.962750 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-02 00:56:56.962756 | orchestrator | Friday 02 January 2026 00:49:18 +0000 (0:00:00.387) 0:03:12.545 ******** 2026-01-02 00:56:56.962762 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.962768 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.962774 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.962781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:56:56.962787 | orchestrator | 2026-01-02 00:56:56.962793 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-02 00:56:56.962799 | orchestrator | Friday 02 January 2026 00:49:19 +0000 (0:00:00.681) 0:03:13.227 ******** 2026-01-02 00:56:56.962805 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.962812 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.962818 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.962824 | orchestrator | 2026-01-02 00:56:56.962830 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-02 00:56:56.962836 | orchestrator | Friday 02 January 2026 00:49:19 +0000 (0:00:00.421) 0:03:13.648 ******** 2026-01-02 00:56:56.962843 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.962849 | orchestrator | changed: [testbed-node-4] 2026-01-02 
00:56:56.962855 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.962861 | orchestrator | 2026-01-02 00:56:56.962867 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-02 00:56:56.962873 | orchestrator | Friday 02 January 2026 00:49:20 +0000 (0:00:01.303) 0:03:14.952 ******** 2026-01-02 00:56:56.962880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:56:56.962886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:56:56.962892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:56:56.962898 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962904 | orchestrator | 2026-01-02 00:56:56.962911 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-02 00:56:56.962917 | orchestrator | Friday 02 January 2026 00:49:21 +0000 (0:00:00.608) 0:03:15.560 ******** 2026-01-02 00:56:56.962923 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.962929 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.962935 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.962942 | orchestrator | 2026-01-02 00:56:56.962948 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-02 00:56:56.962954 | orchestrator | Friday 02 January 2026 00:49:21 +0000 (0:00:00.337) 0:03:15.897 ******** 2026-01-02 00:56:56.962960 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.962967 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.962973 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.962979 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.962985 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963012 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963019 | orchestrator | 2026-01-02 00:56:56.963030 | orchestrator | RUNNING 
HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-02 00:56:56.963037 | orchestrator | Friday 02 January 2026 00:49:22 +0000 (0:00:00.898) 0:03:16.796 ******** 2026-01-02 00:56:56.963043 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.963049 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.963055 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.963062 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.963068 | orchestrator | 2026-01-02 00:56:56.963074 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-02 00:56:56.963080 | orchestrator | Friday 02 January 2026 00:49:23 +0000 (0:00:00.866) 0:03:17.662 ******** 2026-01-02 00:56:56.963086 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963092 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963098 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963104 | orchestrator | 2026-01-02 00:56:56.963110 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-02 00:56:56.963117 | orchestrator | Friday 02 January 2026 00:49:24 +0000 (0:00:00.500) 0:03:18.163 ******** 2026-01-02 00:56:56.963123 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.963129 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.963135 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.963141 | orchestrator | 2026-01-02 00:56:56.963147 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-02 00:56:56.963154 | orchestrator | Friday 02 January 2026 00:49:25 +0000 (0:00:01.215) 0:03:19.379 ******** 2026-01-02 00:56:56.963160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-02 00:56:56.963166 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2026-01-02 00:56:56.963172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-02 00:56:56.963178 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963184 | orchestrator | 2026-01-02 00:56:56.963190 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-02 00:56:56.963196 | orchestrator | Friday 02 January 2026 00:49:25 +0000 (0:00:00.651) 0:03:20.031 ******** 2026-01-02 00:56:56.963202 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963208 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963214 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963220 | orchestrator | 2026-01-02 00:56:56.963226 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-02 00:56:56.963233 | orchestrator | 2026-01-02 00:56:56.963239 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-02 00:56:56.963248 | orchestrator | Friday 02 January 2026 00:49:26 +0000 (0:00:00.681) 0:03:20.712 ******** 2026-01-02 00:56:56.963255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.963261 | orchestrator | 2026-01-02 00:56:56.963267 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-02 00:56:56.963273 | orchestrator | Friday 02 January 2026 00:49:27 +0000 (0:00:00.813) 0:03:21.526 ******** 2026-01-02 00:56:56.963279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.963286 | orchestrator | 2026-01-02 00:56:56.963292 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-02 00:56:56.963298 | orchestrator | Friday 02 January 2026 00:49:27 +0000 
(0:00:00.495) 0:03:22.022 ******** 2026-01-02 00:56:56.963304 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963310 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963316 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963322 | orchestrator | 2026-01-02 00:56:56.963328 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-02 00:56:56.963334 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:01.366) 0:03:23.389 ******** 2026-01-02 00:56:56.963344 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963351 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963357 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963363 | orchestrator | 2026-01-02 00:56:56.963369 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-02 00:56:56.963375 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:00.278) 0:03:23.667 ******** 2026-01-02 00:56:56.963381 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963387 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963393 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963399 | orchestrator | 2026-01-02 00:56:56.963405 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-02 00:56:56.963411 | orchestrator | Friday 02 January 2026 00:49:29 +0000 (0:00:00.265) 0:03:23.933 ******** 2026-01-02 00:56:56.963418 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963424 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963430 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963436 | orchestrator | 2026-01-02 00:56:56.963442 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-02 00:56:56.963448 | orchestrator | Friday 02 January 2026 00:49:30 +0000 (0:00:00.245) 
0:03:24.178 ******** 2026-01-02 00:56:56.963454 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963460 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963466 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963472 | orchestrator | 2026-01-02 00:56:56.963478 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-02 00:56:56.963485 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:00.895) 0:03:25.074 ******** 2026-01-02 00:56:56.963491 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963497 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963503 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963509 | orchestrator | 2026-01-02 00:56:56.963515 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 00:56:56.963521 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:00.291) 0:03:25.366 ******** 2026-01-02 00:56:56.963602 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963612 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963618 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963624 | orchestrator | 2026-01-02 00:56:56.963630 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:56:56.963637 | orchestrator | Friday 02 January 2026 00:49:31 +0000 (0:00:00.273) 0:03:25.640 ******** 2026-01-02 00:56:56.963643 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963649 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963655 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963661 | orchestrator | 2026-01-02 00:56:56.963667 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:56:56.963673 | orchestrator | Friday 02 January 2026 00:49:32 +0000 (0:00:00.740) 0:03:26.381 ******** 2026-01-02 
00:56:56.963680 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963686 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963692 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963698 | orchestrator | 2026-01-02 00:56:56.963704 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:56:56.963710 | orchestrator | Friday 02 January 2026 00:49:33 +0000 (0:00:00.746) 0:03:27.127 ******** 2026-01-02 00:56:56.963716 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963722 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963728 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963735 | orchestrator | 2026-01-02 00:56:56.963741 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:56:56.963747 | orchestrator | Friday 02 January 2026 00:49:33 +0000 (0:00:00.528) 0:03:27.656 ******** 2026-01-02 00:56:56.963753 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963765 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963771 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963777 | orchestrator | 2026-01-02 00:56:56.963783 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:56:56.963789 | orchestrator | Friday 02 January 2026 00:49:33 +0000 (0:00:00.353) 0:03:28.009 ******** 2026-01-02 00:56:56.963795 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963800 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963805 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963811 | orchestrator | 2026-01-02 00:56:56.963816 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:56:56.963821 | orchestrator | Friday 02 January 2026 00:49:34 +0000 (0:00:00.347) 0:03:28.357 ******** 2026-01-02 00:56:56.963827 | orchestrator | 
skipping: [testbed-node-0] 2026-01-02 00:56:56.963832 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963837 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963842 | orchestrator | 2026-01-02 00:56:56.963848 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:56:56.963857 | orchestrator | Friday 02 January 2026 00:49:34 +0000 (0:00:00.285) 0:03:28.642 ******** 2026-01-02 00:56:56.963862 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963868 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963873 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963878 | orchestrator | 2026-01-02 00:56:56.963884 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:56:56.963889 | orchestrator | Friday 02 January 2026 00:49:34 +0000 (0:00:00.357) 0:03:29.000 ******** 2026-01-02 00:56:56.963894 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963899 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963905 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963910 | orchestrator | 2026-01-02 00:56:56.963915 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:56:56.963921 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:00.254) 0:03:29.254 ******** 2026-01-02 00:56:56.963926 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.963931 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.963937 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.963942 | orchestrator | 2026-01-02 00:56:56.963947 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:56:56.963953 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:00.214) 0:03:29.469 ******** 2026-01-02 00:56:56.963958 | orchestrator | ok: 
[testbed-node-0] 2026-01-02 00:56:56.963963 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.963969 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.963974 | orchestrator | 2026-01-02 00:56:56.963979 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:56:56.963984 | orchestrator | Friday 02 January 2026 00:49:35 +0000 (0:00:00.249) 0:03:29.718 ******** 2026-01-02 00:56:56.963990 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.963995 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964000 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964006 | orchestrator | 2026-01-02 00:56:56.964011 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:56:56.964016 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:00.400) 0:03:30.119 ******** 2026-01-02 00:56:56.964022 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964027 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964032 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964038 | orchestrator | 2026-01-02 00:56:56.964043 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-02 00:56:56.964048 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:00.396) 0:03:30.516 ******** 2026-01-02 00:56:56.964054 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964059 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964064 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964075 | orchestrator | 2026-01-02 00:56:56.964081 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-02 00:56:56.964086 | orchestrator | Friday 02 January 2026 00:49:36 +0000 (0:00:00.256) 0:03:30.772 ******** 2026-01-02 00:56:56.964091 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-02 00:56:56.964097 | orchestrator | 2026-01-02 00:56:56.964102 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-02 00:56:56.964107 | orchestrator | Friday 02 January 2026 00:49:37 +0000 (0:00:00.588) 0:03:31.360 ******** 2026-01-02 00:56:56.964113 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.964118 | orchestrator | 2026-01-02 00:56:56.964142 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-02 00:56:56.964149 | orchestrator | Friday 02 January 2026 00:49:37 +0000 (0:00:00.122) 0:03:31.483 ******** 2026-01-02 00:56:56.964154 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-02 00:56:56.964160 | orchestrator | 2026-01-02 00:56:56.964165 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-02 00:56:56.964170 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:00.897) 0:03:32.380 ******** 2026-01-02 00:56:56.964176 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964181 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964186 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964192 | orchestrator | 2026-01-02 00:56:56.964197 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-02 00:56:56.964202 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:00.294) 0:03:32.675 ******** 2026-01-02 00:56:56.964208 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964213 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964218 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964224 | orchestrator | 2026-01-02 00:56:56.964229 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-02 00:56:56.964235 | orchestrator | Friday 02 January 2026 00:49:38 +0000 (0:00:00.286) 0:03:32.961 
******** 2026-01-02 00:56:56.964240 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964245 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964251 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964256 | orchestrator | 2026-01-02 00:56:56.964261 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-02 00:56:56.964267 | orchestrator | Friday 02 January 2026 00:49:40 +0000 (0:00:01.447) 0:03:34.408 ******** 2026-01-02 00:56:56.964273 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964278 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964283 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964288 | orchestrator | 2026-01-02 00:56:56.964294 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-02 00:56:56.964299 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.802) 0:03:35.211 ******** 2026-01-02 00:56:56.964304 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964310 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964315 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964320 | orchestrator | 2026-01-02 00:56:56.964326 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-02 00:56:56.964331 | orchestrator | Friday 02 January 2026 00:49:41 +0000 (0:00:00.659) 0:03:35.871 ******** 2026-01-02 00:56:56.964336 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964342 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964347 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964352 | orchestrator | 2026-01-02 00:56:56.964361 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-02 00:56:56.964367 | orchestrator | Friday 02 January 2026 00:49:42 +0000 (0:00:00.745) 0:03:36.616 ******** 2026-01-02 
00:56:56.964372 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964378 | orchestrator | 2026-01-02 00:56:56.964383 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-02 00:56:56.964395 | orchestrator | Friday 02 January 2026 00:49:43 +0000 (0:00:01.320) 0:03:37.936 ******** 2026-01-02 00:56:56.964400 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964406 | orchestrator | 2026-01-02 00:56:56.964411 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-02 00:56:56.964416 | orchestrator | Friday 02 January 2026 00:49:45 +0000 (0:00:01.484) 0:03:39.420 ******** 2026-01-02 00:56:56.964422 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:56:56.964427 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:56:56.964432 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:56:56.964438 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 00:56:56.964443 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-02 00:56:56.964448 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-02 00:56:56.964454 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 00:56:56.964459 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-02 00:56:56.964464 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-02 00:56:56.964470 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-02 00:56:56.964475 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-02 00:56:56.964480 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-02 00:56:56.964486 | orchestrator | 2026-01-02 00:56:56.964491 | orchestrator | TASK [ceph-mon : Import admin keyring 
into mon keyring] ************************ 2026-01-02 00:56:56.964496 | orchestrator | Friday 02 January 2026 00:49:48 +0000 (0:00:03.408) 0:03:42.829 ******** 2026-01-02 00:56:56.964501 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964507 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964512 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964517 | orchestrator | 2026-01-02 00:56:56.964523 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-02 00:56:56.964543 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:01.371) 0:03:44.200 ******** 2026-01-02 00:56:56.964548 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964554 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964559 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964564 | orchestrator | 2026-01-02 00:56:56.964570 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-02 00:56:56.964575 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:00.384) 0:03:44.584 ******** 2026-01-02 00:56:56.964580 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.964586 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.964591 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.964596 | orchestrator | 2026-01-02 00:56:56.964602 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-02 00:56:56.964607 | orchestrator | Friday 02 January 2026 00:49:50 +0000 (0:00:00.313) 0:03:44.898 ******** 2026-01-02 00:56:56.964613 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964637 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964644 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964649 | orchestrator | 2026-01-02 00:56:56.964655 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 
2026-01-02 00:56:56.964660 | orchestrator | Friday 02 January 2026 00:49:52 +0000 (0:00:01.827) 0:03:46.726 ******** 2026-01-02 00:56:56.964665 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964671 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964676 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964681 | orchestrator | 2026-01-02 00:56:56.964687 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-02 00:56:56.964692 | orchestrator | Friday 02 January 2026 00:49:54 +0000 (0:00:01.668) 0:03:48.395 ******** 2026-01-02 00:56:56.964697 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.964707 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.964712 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.964718 | orchestrator | 2026-01-02 00:56:56.964723 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-02 00:56:56.964728 | orchestrator | Friday 02 January 2026 00:49:54 +0000 (0:00:00.312) 0:03:48.707 ******** 2026-01-02 00:56:56.964734 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.964739 | orchestrator | 2026-01-02 00:56:56.964745 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-02 00:56:56.964750 | orchestrator | Friday 02 January 2026 00:49:55 +0000 (0:00:00.699) 0:03:49.406 ******** 2026-01-02 00:56:56.964755 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.964761 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.964766 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.964771 | orchestrator | 2026-01-02 00:56:56.964776 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-02 00:56:56.964782 | orchestrator | Friday 02 January 2026 
00:49:55 +0000 (0:00:00.318) 0:03:49.725 ******** 2026-01-02 00:56:56.964787 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.964792 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.964798 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.964803 | orchestrator | 2026-01-02 00:56:56.964808 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-02 00:56:56.964814 | orchestrator | Friday 02 January 2026 00:49:56 +0000 (0:00:00.336) 0:03:50.062 ******** 2026-01-02 00:56:56.964819 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.964825 | orchestrator | 2026-01-02 00:56:56.964830 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-02 00:56:56.964839 | orchestrator | Friday 02 January 2026 00:49:56 +0000 (0:00:00.769) 0:03:50.831 ******** 2026-01-02 00:56:56.964844 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964850 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964855 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964860 | orchestrator | 2026-01-02 00:56:56.964866 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-02 00:56:56.964871 | orchestrator | Friday 02 January 2026 00:49:59 +0000 (0:00:02.345) 0:03:53.177 ******** 2026-01-02 00:56:56.964876 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964882 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964887 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964892 | orchestrator | 2026-01-02 00:56:56.964898 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-02 00:56:56.964903 | orchestrator | Friday 02 January 2026 00:50:00 +0000 (0:00:01.415) 0:03:54.592 ******** 2026-01-02 
00:56:56.964908 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964914 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964919 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964925 | orchestrator | 2026-01-02 00:56:56.964930 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-02 00:56:56.964935 | orchestrator | Friday 02 January 2026 00:50:02 +0000 (0:00:01.835) 0:03:56.428 ******** 2026-01-02 00:56:56.964941 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.964946 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.964951 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.964956 | orchestrator | 2026-01-02 00:56:56.964962 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-02 00:56:56.964967 | orchestrator | Friday 02 January 2026 00:50:04 +0000 (0:00:02.341) 0:03:58.770 ******** 2026-01-02 00:56:56.964972 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.964978 | orchestrator | 2026-01-02 00:56:56.964987 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-02 00:56:56.964993 | orchestrator | Friday 02 January 2026 00:50:05 +0000 (0:00:00.576) 0:03:59.346 ******** 2026-01-02 00:56:56.964998 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-02 00:56:56.965004 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965009 | orchestrator | 2026-01-02 00:56:56.965014 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-02 00:56:56.965020 | orchestrator | Friday 02 January 2026 00:50:27 +0000 (0:00:22.028) 0:04:21.374 ******** 2026-01-02 00:56:56.965025 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965031 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965036 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965041 | orchestrator | 2026-01-02 00:56:56.965047 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-02 00:56:56.965052 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:10.270) 0:04:31.645 ******** 2026-01-02 00:56:56.965058 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965063 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965068 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965074 | orchestrator | 2026-01-02 00:56:56.965079 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-02 00:56:56.965103 | orchestrator | Friday 02 January 2026 00:50:37 +0000 (0:00:00.392) 0:04:32.038 ******** 2026-01-02 00:56:56.965110 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-02 00:56:56.965118 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-02 00:56:56.965125 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-02 00:56:56.965133 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-02 00:56:56.965141 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-02 00:56:56.965148 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__fd64e03900f5626ed979ee80d1dba7159ce832bb'}])  2026-01-02 00:56:56.965160 | orchestrator | 2026-01-02 00:56:56.965165 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-01-02 00:56:56.965171 | orchestrator | Friday 02 January 2026 00:50:53 +0000 (0:00:15.523) 0:04:47.561 ******** 2026-01-02 00:56:56.965176 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965182 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965187 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965192 | orchestrator | 2026-01-02 00:56:56.965198 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-02 00:56:56.965203 | orchestrator | Friday 02 January 2026 00:50:53 +0000 (0:00:00.316) 0:04:47.878 ******** 2026-01-02 00:56:56.965209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.965214 | orchestrator | 2026-01-02 00:56:56.965220 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-02 00:56:56.965225 | orchestrator | Friday 02 January 2026 00:50:54 +0000 (0:00:00.612) 0:04:48.491 ******** 2026-01-02 00:56:56.965230 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965236 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965241 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965247 | orchestrator | 2026-01-02 00:56:56.965252 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-02 00:56:56.965257 | orchestrator | Friday 02 January 2026 00:50:54 +0000 (0:00:00.300) 0:04:48.791 ******** 2026-01-02 00:56:56.965263 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965268 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965273 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965279 | orchestrator | 2026-01-02 00:56:56.965284 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-02 
00:56:56.965289 | orchestrator | Friday 02 January 2026 00:50:55 +0000 (0:00:00.277) 0:04:49.069 ******** 2026-01-02 00:56:56.965295 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-02 00:56:56.965300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-02 00:56:56.965306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-02 00:56:56.965311 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965316 | orchestrator | 2026-01-02 00:56:56.965322 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-02 00:56:56.965327 | orchestrator | Friday 02 January 2026 00:50:55 +0000 (0:00:00.722) 0:04:49.792 ******** 2026-01-02 00:56:56.965332 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965338 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965361 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965367 | orchestrator | 2026-01-02 00:56:56.965373 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-02 00:56:56.965378 | orchestrator | 2026-01-02 00:56:56.965383 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-02 00:56:56.965389 | orchestrator | Friday 02 January 2026 00:50:56 +0000 (0:00:00.705) 0:04:50.497 ******** 2026-01-02 00:56:56.965394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.965400 | orchestrator | 2026-01-02 00:56:56.965405 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-02 00:56:56.965411 | orchestrator | Friday 02 January 2026 00:50:56 +0000 (0:00:00.394) 0:04:50.891 ******** 2026-01-02 00:56:56.965416 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-02 00:56:56.965421 | orchestrator | 2026-01-02 00:56:56.965427 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-02 00:56:56.965432 | orchestrator | Friday 02 January 2026 00:50:57 +0000 (0:00:00.546) 0:04:51.438 ******** 2026-01-02 00:56:56.965437 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965443 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965453 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965458 | orchestrator | 2026-01-02 00:56:56.965464 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-02 00:56:56.965469 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:00.790) 0:04:52.229 ******** 2026-01-02 00:56:56.965474 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965480 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965485 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965490 | orchestrator | 2026-01-02 00:56:56.965496 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-02 00:56:56.965501 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:00.278) 0:04:52.507 ******** 2026-01-02 00:56:56.965506 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965512 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965517 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965522 | orchestrator | 2026-01-02 00:56:56.965544 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-02 00:56:56.965550 | orchestrator | Friday 02 January 2026 00:50:58 +0000 (0:00:00.258) 0:04:52.766 ******** 2026-01-02 00:56:56.965555 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965560 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965566 | orchestrator | skipping: 
[testbed-node-2] 2026-01-02 00:56:56.965571 | orchestrator | 2026-01-02 00:56:56.965580 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-02 00:56:56.965585 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.432) 0:04:53.199 ******** 2026-01-02 00:56:56.965591 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965596 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965601 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965607 | orchestrator | 2026-01-02 00:56:56.965612 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-02 00:56:56.965617 | orchestrator | Friday 02 January 2026 00:50:59 +0000 (0:00:00.723) 0:04:53.922 ******** 2026-01-02 00:56:56.965623 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965628 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965633 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965638 | orchestrator | 2026-01-02 00:56:56.965644 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 00:56:56.965649 | orchestrator | Friday 02 January 2026 00:51:00 +0000 (0:00:00.516) 0:04:54.438 ******** 2026-01-02 00:56:56.965655 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965660 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965665 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965671 | orchestrator | 2026-01-02 00:56:56.965676 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:56:56.965681 | orchestrator | Friday 02 January 2026 00:51:00 +0000 (0:00:00.291) 0:04:54.729 ******** 2026-01-02 00:56:56.965687 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965692 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965697 | orchestrator | ok: [testbed-node-2] 2026-01-02 
00:56:56.965703 | orchestrator | 2026-01-02 00:56:56.965708 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:56:56.965713 | orchestrator | Friday 02 January 2026 00:51:01 +0000 (0:00:00.938) 0:04:55.668 ******** 2026-01-02 00:56:56.965719 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965724 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965729 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965735 | orchestrator | 2026-01-02 00:56:56.965740 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:56:56.965745 | orchestrator | Friday 02 January 2026 00:51:02 +0000 (0:00:00.786) 0:04:56.455 ******** 2026-01-02 00:56:56.965751 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965756 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965762 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965767 | orchestrator | 2026-01-02 00:56:56.965776 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:56:56.965782 | orchestrator | Friday 02 January 2026 00:51:02 +0000 (0:00:00.293) 0:04:56.749 ******** 2026-01-02 00:56:56.965787 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.965793 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.965798 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.965803 | orchestrator | 2026-01-02 00:56:56.965809 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:56:56.965814 | orchestrator | Friday 02 January 2026 00:51:02 +0000 (0:00:00.259) 0:04:57.008 ******** 2026-01-02 00:56:56.965820 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965825 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965830 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965836 | orchestrator | 
2026-01-02 00:56:56.965841 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:56:56.965865 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.394) 0:04:57.403 ******** 2026-01-02 00:56:56.965871 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965877 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965882 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965887 | orchestrator | 2026-01-02 00:56:56.965893 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:56:56.965898 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.232) 0:04:57.636 ******** 2026-01-02 00:56:56.965903 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965909 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965914 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965919 | orchestrator | 2026-01-02 00:56:56.965924 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:56:56.965930 | orchestrator | Friday 02 January 2026 00:51:03 +0000 (0:00:00.278) 0:04:57.915 ******** 2026-01-02 00:56:56.965935 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965941 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965946 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965951 | orchestrator | 2026-01-02 00:56:56.965956 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:56:56.965962 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:00.241) 0:04:58.157 ******** 2026-01-02 00:56:56.965967 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.965972 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.965977 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.965983 | orchestrator | 
2026-01-02 00:56:56.965988 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:56:56.965993 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:00.417) 0:04:58.575 ******** 2026-01-02 00:56:56.965999 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.966004 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.966009 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.966037 | orchestrator | 2026-01-02 00:56:56.966044 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:56:56.966050 | orchestrator | Friday 02 January 2026 00:51:04 +0000 (0:00:00.295) 0:04:58.870 ******** 2026-01-02 00:56:56.966055 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.966061 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.966066 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.966071 | orchestrator | 2026-01-02 00:56:56.966077 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:56:56.966082 | orchestrator | Friday 02 January 2026 00:51:05 +0000 (0:00:00.338) 0:04:59.209 ******** 2026-01-02 00:56:56.966087 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.966093 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.966098 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.966103 | orchestrator | 2026-01-02 00:56:56.966109 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-02 00:56:56.966121 | orchestrator | Friday 02 January 2026 00:51:05 +0000 (0:00:00.602) 0:04:59.812 ******** 2026-01-02 00:56:56.966127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-02 00:56:56.966132 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:56:56.966138 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-02 00:56:56.966143 | orchestrator | 2026-01-02 00:56:56.966148 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-02 00:56:56.966154 | orchestrator | Friday 02 January 2026 00:51:06 +0000 (0:00:00.767) 0:05:00.579 ******** 2026-01-02 00:56:56.966159 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.966165 | orchestrator | 2026-01-02 00:56:56.966170 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-02 00:56:56.966175 | orchestrator | Friday 02 January 2026 00:51:06 +0000 (0:00:00.404) 0:05:00.983 ******** 2026-01-02 00:56:56.966181 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.966186 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.966191 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.966197 | orchestrator | 2026-01-02 00:56:56.966202 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-02 00:56:56.966207 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:00.593) 0:05:01.576 ******** 2026-01-02 00:56:56.966213 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.966218 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.966223 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.966229 | orchestrator | 2026-01-02 00:56:56.966234 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-02 00:56:56.966239 | orchestrator | Friday 02 January 2026 00:51:07 +0000 (0:00:00.422) 0:05:01.998 ******** 2026-01-02 00:56:56.966245 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:56:56.966250 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:56:56.966255 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-02 00:56:56.966261 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-02 00:56:56.966266 | orchestrator | 2026-01-02 00:56:56.966271 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-02 00:56:56.966277 | orchestrator | Friday 02 January 2026 00:51:19 +0000 (0:00:11.318) 0:05:13.317 ******** 2026-01-02 00:56:56.966282 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.966287 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.966293 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.966298 | orchestrator | 2026-01-02 00:56:56.966303 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-02 00:56:56.966309 | orchestrator | Friday 02 January 2026 00:51:19 +0000 (0:00:00.326) 0:05:13.643 ******** 2026-01-02 00:56:56.966314 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-02 00:56:56.966319 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-02 00:56:56.966325 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-02 00:56:56.966330 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-02 00:56:56.966335 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:56:56.966358 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:56:56.966364 | orchestrator | 2026-01-02 00:56:56.966370 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-02 00:56:56.966375 | orchestrator | Friday 02 January 2026 00:51:22 +0000 (0:00:02.445) 0:05:16.089 ******** 2026-01-02 00:56:56.966381 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-02 00:56:56.966386 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-02 00:56:56.966391 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-02 
00:56:56.966397 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-02 00:56:56.966406 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-02 00:56:56.966412 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-02 00:56:56.966417 | orchestrator | 2026-01-02 00:56:56.966422 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-02 00:56:56.966428 | orchestrator | Friday 02 January 2026 00:51:23 +0000 (0:00:01.447) 0:05:17.537 ******** 2026-01-02 00:56:56.966433 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.966438 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.966444 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.966449 | orchestrator | 2026-01-02 00:56:56.966454 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-02 00:56:56.966460 | orchestrator | Friday 02 January 2026 00:51:24 +0000 (0:00:00.820) 0:05:18.357 ******** 2026-01-02 00:56:56.966465 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.966470 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.966476 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.966481 | orchestrator | 2026-01-02 00:56:56.966486 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-02 00:56:56.966492 | orchestrator | Friday 02 January 2026 00:51:24 +0000 (0:00:00.254) 0:05:18.612 ******** 2026-01-02 00:56:56.966497 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.966502 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.966508 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.966513 | orchestrator | 2026-01-02 00:56:56.966518 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-02 00:56:56.966524 | orchestrator | Friday 02 January 2026 00:51:24 +0000 (0:00:00.253) 0:05:18.866 
********
2026-01-02 00:56:56.966548 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.966557 | orchestrator |
2026-01-02 00:56:56.966566 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-02 00:56:56.966575 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:00.618) 0:05:19.484 ********
2026-01-02 00:56:56.966582 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.966591 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.966597 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.966602 | orchestrator |
2026-01-02 00:56:56.966607 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-02 00:56:56.966613 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:00.284) 0:05:19.769 ********
2026-01-02 00:56:56.966618 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.966623 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.966629 | orchestrator | skipping: [testbed-node-2]
2026-01-02 00:56:56.966634 | orchestrator |
2026-01-02 00:56:56.966640 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-02 00:56:56.966645 | orchestrator | Friday 02 January 2026 00:51:25 +0000 (0:00:00.284) 0:05:20.053 ********
2026-01-02 00:56:56.966650 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.966656 | orchestrator |
2026-01-02 00:56:56.966661 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-02 00:56:56.966667 | orchestrator | Friday 02 January 2026 00:51:26 +0000 (0:00:00.444) 0:05:20.497 ********
2026-01-02 00:56:56.966672 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.966677 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.966683 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.966688 | orchestrator |
2026-01-02 00:56:56.966693 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-02 00:56:56.966699 | orchestrator | Friday 02 January 2026 00:51:28 +0000 (0:00:01.650) 0:05:22.148 ********
2026-01-02 00:56:56.966704 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.966710 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.966715 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.966726 | orchestrator |
2026-01-02 00:56:56.966731 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-02 00:56:56.966737 | orchestrator | Friday 02 January 2026 00:51:29 +0000 (0:00:01.209) 0:05:23.357 ********
2026-01-02 00:56:56.966742 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.966748 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.966753 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.966758 | orchestrator |
2026-01-02 00:56:56.966764 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-02 00:56:56.966769 | orchestrator | Friday 02 January 2026 00:51:31 +0000 (0:00:01.981) 0:05:25.339 ********
2026-01-02 00:56:56.966774 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.966780 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.966785 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.966790 | orchestrator |
2026-01-02 00:56:56.966796 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-02 00:56:56.966801 | orchestrator | Friday 02 January 2026 00:51:33 +0000 (0:00:02.211) 0:05:27.550 ********
2026-01-02 00:56:56.966806 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.966812 | orchestrator | skipping: [testbed-node-1]
2026-01-02 00:56:56.966817 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-02 00:56:56.966823 | orchestrator |
2026-01-02 00:56:56.966828 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-02 00:56:56.966833 | orchestrator | Friday 02 January 2026 00:51:34 +0000 (0:00:00.580) 0:05:28.130 ********
2026-01-02 00:56:56.966859 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-02 00:56:56.966865 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-02 00:56:56.966871 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-02 00:56:56.966876 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-02 00:56:56.966882 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-02 00:56:56.966887 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-01-02 00:56:56.966892 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:56:56.966898 | orchestrator |
2026-01-02 00:56:56.966903 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-02 00:56:56.966909 | orchestrator | Friday 02 January 2026 00:52:10 +0000 (0:00:36.128) 0:06:04.258 ********
2026-01-02 00:56:56.966914 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:56:56.966919 | orchestrator |
2026-01-02 00:56:56.966925 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-02 00:56:56.966930 | orchestrator | Friday 02 January 2026 00:52:11 +0000 (0:00:01.338) 0:06:05.596 ********
2026-01-02 00:56:56.966935 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.966941 | orchestrator |
2026-01-02 00:56:56.966946 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-02 00:56:56.966951 | orchestrator | Friday 02 January 2026 00:52:11 +0000 (0:00:00.161) 0:06:05.915 ********
2026-01-02 00:56:56.966957 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.966962 | orchestrator |
2026-01-02 00:56:56.966967 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-02 00:56:56.966972 | orchestrator | Friday 02 January 2026 00:52:12 +0000 (0:00:00.318) 0:06:06.077 ********
2026-01-02 00:56:56.966978 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-02 00:56:56.966983 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-02 00:56:56.966988 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-02 00:56:56.966999 | orchestrator |
2026-01-02 00:56:56.967004 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-02 00:56:56.967014 | orchestrator | Friday 02 January 2026 00:52:18 +0000 (0:00:06.807) 0:06:12.884 ********
2026-01-02 00:56:56.967020 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-02 00:56:56.967025 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-02 00:56:56.967031 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-02 00:56:56.967036 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-02 00:56:56.967041 | orchestrator |
2026-01-02 00:56:56.967047 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-02 00:56:56.967052 | orchestrator | Friday 02 January 2026 00:52:23 +0000 (0:00:05.069) 0:06:17.953 ********
2026-01-02 00:56:56.967058 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.967063 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.967068 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.967074 | orchestrator |
2026-01-02 00:56:56.967079 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-02 00:56:56.967085 | orchestrator | Friday 02 January 2026 00:52:24 +0000 (0:00:00.628) 0:06:18.582 ********
2026-01-02 00:56:56.967090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.967095 | orchestrator |
2026-01-02 00:56:56.967101 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-02 00:56:56.967106 | orchestrator | Friday 02 January 2026 00:52:24 +0000 (0:00:00.449) 0:06:19.032 ********
2026-01-02 00:56:56.967112 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.967117 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.967122 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.967128 | orchestrator |
2026-01-02 00:56:56.967133 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-02 00:56:56.967139 | orchestrator | Friday 02 January 2026 00:52:25 +0000 (0:00:00.450) 0:06:19.483 ********
2026-01-02 00:56:56.967144 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.967149 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.967155 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.967160 | orchestrator |
2026-01-02 00:56:56.967165 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-02 00:56:56.967171 | orchestrator | Friday 02 January 2026 00:52:26 +0000 (0:00:01.323) 0:06:20.807 ********
2026-01-02 00:56:56.967176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-02 00:56:56.967182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-02 00:56:56.967187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-02 00:56:56.967192 | orchestrator | skipping: [testbed-node-0]
2026-01-02 00:56:56.967198 | orchestrator |
2026-01-02 00:56:56.967203 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-02 00:56:56.967208 | orchestrator | Friday 02 January 2026 00:52:27 +0000 (0:00:00.534) 0:06:21.341 ********
2026-01-02 00:56:56.967214 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.967219 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.967224 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.967230 | orchestrator |
2026-01-02 00:56:56.967235 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-02 00:56:56.967241 | orchestrator |
2026-01-02 00:56:56.967246 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:56:56.967270 | orchestrator | Friday 02 January 2026 00:52:27 +0000 (0:00:00.502) 0:06:21.844 ********
2026-01-02 00:56:56.967276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.967282 | orchestrator |
2026-01-02 00:56:56.967287 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:56:56.967297 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:00.688) 0:06:22.533 ********
2026-01-02 00:56:56.967303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.967308 | orchestrator |
2026-01-02 00:56:56.967313 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:56:56.967319 | orchestrator | Friday 02 January 2026 00:52:28 +0000 (0:00:00.494) 0:06:23.027 ********
2026-01-02 00:56:56.967324 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967329 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967335 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967340 | orchestrator |
2026-01-02 00:56:56.967345 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:56:56.967351 | orchestrator | Friday 02 January 2026 00:52:29 +0000 (0:00:00.569) 0:06:23.596 ********
2026-01-02 00:56:56.967356 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967361 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967367 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967372 | orchestrator |
2026-01-02 00:56:56.967377 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:56:56.967383 | orchestrator | Friday 02 January 2026 00:52:30 +0000 (0:00:00.741) 0:06:24.338 ********
2026-01-02 00:56:56.967388 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967393 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967399 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967404 | orchestrator |
2026-01-02 00:56:56.967409 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:56:56.967415 | orchestrator | Friday 02 January 2026 00:52:31 +0000 (0:00:00.824) 0:06:25.162 ********
2026-01-02 00:56:56.967420 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967426 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967431 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967436 | orchestrator |
2026-01-02 00:56:56.967442 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:56:56.967447 | orchestrator | Friday 02 January 2026 00:52:31 +0000 (0:00:00.725) 0:06:25.888 ********
2026-01-02 00:56:56.967452 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967458 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967463 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967468 | orchestrator |
2026-01-02 00:56:56.967477 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:56:56.967482 | orchestrator | Friday 02 January 2026 00:52:32 +0000 (0:00:00.473) 0:06:26.362 ********
2026-01-02 00:56:56.967488 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967493 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967498 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967504 | orchestrator |
2026-01-02 00:56:56.967509 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-02 00:56:56.967514 | orchestrator | Friday 02 January 2026 00:52:32 +0000 (0:00:00.218) 0:06:26.633 ********
2026-01-02 00:56:56.967520 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967525 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967562 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967568 | orchestrator |
2026-01-02 00:56:56.967574 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-02 00:56:56.967579 | orchestrator | Friday 02 January 2026 00:52:32 +0000 (0:00:00.218) 0:06:26.851 ********
2026-01-02 00:56:56.967584 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967590 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967595 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967600 | orchestrator |
2026-01-02 00:56:56.967606 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-02 00:56:56.967611 | orchestrator | Friday 02 January 2026 00:52:33 +0000 (0:00:00.663) 0:06:27.515 ********
2026-01-02 00:56:56.967623 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967629 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967634 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967639 | orchestrator |
2026-01-02 00:56:56.967645 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-02 00:56:56.967650 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.993) 0:06:28.508 ********
2026-01-02 00:56:56.967654 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967659 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967664 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967669 | orchestrator |
2026-01-02 00:56:56.967673 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-02 00:56:56.967678 | orchestrator | Friday 02 January 2026 00:52:34 +0000 (0:00:00.301) 0:06:28.810 ********
2026-01-02 00:56:56.967683 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967688 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967692 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967697 | orchestrator |
2026-01-02 00:56:56.967702 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-02 00:56:56.967707 | orchestrator | Friday 02 January 2026 00:52:35 +0000 (0:00:00.248) 0:06:29.059 ********
2026-01-02 00:56:56.967711 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967716 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967721 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967726 | orchestrator |
2026-01-02 00:56:56.967731 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-02 00:56:56.967735 | orchestrator | Friday 02 January 2026 00:52:35 +0000 (0:00:00.237) 0:06:29.296 ********
2026-01-02 00:56:56.967740 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967745 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967750 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967754 | orchestrator |
2026-01-02 00:56:56.967759 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-02 00:56:56.967767 | orchestrator | Friday 02 January 2026 00:52:35 +0000 (0:00:00.564) 0:06:29.861 ********
2026-01-02 00:56:56.967772 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967776 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967781 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967786 | orchestrator |
2026-01-02 00:56:56.967791 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-02 00:56:56.967795 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:00.292) 0:06:30.153 ********
2026-01-02 00:56:56.967800 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967805 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967810 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967814 | orchestrator |
2026-01-02 00:56:56.967819 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-02 00:56:56.967824 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:00.240) 0:06:30.394 ********
2026-01-02 00:56:56.967829 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967834 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967838 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967843 | orchestrator |
2026-01-02 00:56:56.967848 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-02 00:56:56.967852 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:00.215) 0:06:30.609 ********
2026-01-02 00:56:56.967857 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.967862 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.967867 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.967871 | orchestrator |
2026-01-02 00:56:56.967876 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-02 00:56:56.967881 | orchestrator | Friday 02 January 2026 00:52:36 +0000 (0:00:00.362) 0:06:30.972 ********
2026-01-02 00:56:56.967886 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967894 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967899 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967904 | orchestrator |
2026-01-02 00:56:56.967909 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-02 00:56:56.967913 | orchestrator | Friday 02 January 2026 00:52:37 +0000 (0:00:00.274) 0:06:31.247 ********
2026-01-02 00:56:56.967918 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967923 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967927 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967932 | orchestrator |
2026-01-02 00:56:56.967937 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-02 00:56:56.967942 | orchestrator | Friday 02 January 2026 00:52:37 +0000 (0:00:00.417) 0:06:31.664 ********
2026-01-02 00:56:56.967946 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.967951 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.967956 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.967961 | orchestrator |
2026-01-02 00:56:56.967965 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-02 00:56:56.967974 | orchestrator | Friday 02 January 2026 00:52:37 +0000 (0:00:00.383) 0:06:32.047 ********
2026-01-02 00:56:56.967979 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:56:56.967983 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:56:56.967988 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:56:56.967993 | orchestrator |
2026-01-02 00:56:56.967998 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-02 00:56:56.968003 | orchestrator | Friday 02 January 2026 00:52:38 +0000 (0:00:00.456) 0:06:32.503 ********
2026-01-02 00:56:56.968007 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.968012 | orchestrator |
2026-01-02 00:56:56.968017 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-02 00:56:56.968022 | orchestrator | Friday 02 January 2026 00:52:38 +0000 (0:00:00.400) 0:06:32.904 ********
2026-01-02 00:56:56.968026 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968031 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968036 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.968041 | orchestrator |
2026-01-02 00:56:56.968046 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-02 00:56:56.968050 | orchestrator | Friday 02 January 2026 00:52:39 +0000 (0:00:00.451) 0:06:33.355 ********
2026-01-02 00:56:56.968055 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968060 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968065 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.968069 | orchestrator |
2026-01-02 00:56:56.968074 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-02 00:56:56.968079 | orchestrator | Friday 02 January 2026 00:52:39 +0000 (0:00:00.283) 0:06:33.639 ********
2026-01-02 00:56:56.968084 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.968088 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.968093 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.968098 | orchestrator |
2026-01-02 00:56:56.968103 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-02 00:56:56.968107 | orchestrator | Friday 02 January 2026 00:52:40 +0000 (0:00:00.618) 0:06:34.258 ********
2026-01-02 00:56:56.968112 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.968117 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.968122 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.968126 | orchestrator |
2026-01-02 00:56:56.968131 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-02 00:56:56.968136 | orchestrator | Friday 02 January 2026 00:52:40 +0000 (0:00:00.362) 0:06:34.621 ********
2026-01-02 00:56:56.968141 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-02 00:56:56.968150 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-02 00:56:56.968155 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-02 00:56:56.968164 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-02 00:56:56.968169 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-02 00:56:56.968174 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-02 00:56:56.968179 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-02 00:56:56.968184 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-02 00:56:56.968188 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-02 00:56:56.968193 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-02 00:56:56.968198 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-02 00:56:56.968203 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-02 00:56:56.968207 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-02 00:56:56.968212 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-02 00:56:56.968217 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-02 00:56:56.968221 | orchestrator |
2026-01-02 00:56:56.968226 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-02 00:56:56.968231 | orchestrator | Friday 02 January 2026 00:52:44 +0000 (0:00:03.521) 0:06:38.142 ********
2026-01-02 00:56:56.968236 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968240 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968245 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.968250 | orchestrator |
2026-01-02 00:56:56.968255 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-02 00:56:56.968260 | orchestrator | Friday 02 January 2026 00:52:44 +0000 (0:00:00.235) 0:06:38.377 ********
2026-01-02 00:56:56.968264 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.968269 | orchestrator |
2026-01-02 00:56:56.968274 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-02 00:56:56.968279 | orchestrator | Friday 02 January 2026 00:52:44 +0000 (0:00:00.400) 0:06:38.777 ********
2026-01-02 00:56:56.968283 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-02 00:56:56.968291 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-02 00:56:56.968296 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-02 00:56:56.968300 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-02 00:56:56.968305 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-02 00:56:56.968310 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-02 00:56:56.968315 | orchestrator |
2026-01-02 00:56:56.968320 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-02 00:56:56.968324 | orchestrator | Friday 02 January 2026 00:52:45 +0000 (0:00:00.991) 0:06:39.769 ********
2026-01-02 00:56:56.968329 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.968334 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.968338 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:56:56.968343 | orchestrator |
2026-01-02 00:56:56.968348 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-02 00:56:56.968357 | orchestrator | Friday 02 January 2026 00:52:48 +0000 (0:00:02.508) 0:06:42.277 ********
2026-01-02 00:56:56.968361 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-02 00:56:56.968366 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-02 00:56:56.968371 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.968376 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.968381 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-02 00:56:56.968385 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-02 00:56:56.968390 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.968395 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.968399 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.968404 | orchestrator |
2026-01-02 00:56:56.968409 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-02 00:56:56.968414 | orchestrator | Friday 02 January 2026 00:52:49 +0000 (0:00:01.060) 0:06:43.337 ********
2026-01-02 00:56:56.968419 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:56:56.968423 | orchestrator |
2026-01-02 00:56:56.968428 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-02 00:56:56.968433 | orchestrator | Friday 02 January 2026 00:52:51 +0000 (0:00:02.208) 0:06:45.546 ********
2026-01-02 00:56:56.968438 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.968443 | orchestrator |
2026-01-02 00:56:56.968447 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-02 00:56:56.968452 | orchestrator | Friday 02 January 2026 00:52:51 +0000 (0:00:00.475) 0:06:46.021 ********
2026-01-02 00:56:56.968457 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-804dd052-7dd8-5ffa-9f76-70ebd20e36f7', 'data_vg': 'ceph-804dd052-7dd8-5ffa-9f76-70ebd20e36f7'})
2026-01-02 00:56:56.968463 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fa5ccc98-5ec0-5843-b525-cc12dffb9804', 'data_vg': 'ceph-fa5ccc98-5ec0-5843-b525-cc12dffb9804'})
2026-01-02 00:56:56.968470 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-319da19b-b53c-570d-92cc-c377bf830026', 'data_vg': 'ceph-319da19b-b53c-570d-92cc-c377bf830026'})
2026-01-02 00:56:56.968475 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8699efe3-2ea7-5359-bcef-4eac218b02a9', 'data_vg': 'ceph-8699efe3-2ea7-5359-bcef-4eac218b02a9'})
2026-01-02 00:56:56.968480 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0', 'data_vg': 'ceph-aabdb1ab-3cea-5cae-90fa-5f0cfaabc1a0'})
2026-01-02 00:56:56.968485 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce', 'data_vg': 'ceph-1e1b73ff-0d48-5f4d-91db-a8c1f08fc0ce'})
2026-01-02 00:56:56.968490 | orchestrator |
2026-01-02 00:56:56.968495 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-02 00:56:56.968500 | orchestrator | Friday 02 January 2026 00:53:36 +0000 (0:00:44.243) 0:07:30.265 ********
2026-01-02 00:56:56.968505 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968509 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968514 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.968519 | orchestrator |
2026-01-02 00:56:56.968524 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-02 00:56:56.968544 | orchestrator | Friday 02 January 2026 00:53:36 +0000 (0:00:00.349) 0:07:30.614 ********
2026-01-02 00:56:56.968549 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.968554 | orchestrator |
2026-01-02 00:56:56.968559 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-02 00:56:56.968564 | orchestrator | Friday 02 January 2026 00:53:37 +0000 (0:00:00.581) 0:07:31.196 ********
2026-01-02 00:56:56.968568 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.968577 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.968582 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.968587 | orchestrator |
2026-01-02 00:56:56.968591 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-02 00:56:56.968596 | orchestrator | Friday 02 January 2026 00:53:38 +0000 (0:00:00.937) 0:07:32.133 ********
2026-01-02 00:56:56.968601 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.968606 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.968610 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.968615 | orchestrator |
2026-01-02 00:56:56.968620 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-02 00:56:56.968628 | orchestrator | Friday 02 January 2026 00:53:40 +0000 (0:00:02.630) 0:07:34.763 ********
2026-01-02 00:56:56.968633 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.968637 | orchestrator |
2026-01-02 00:56:56.968642 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-02 00:56:56.968647 | orchestrator | Friday 02 January 2026 00:53:41 +0000 (0:00:00.509) 0:07:35.273 ********
2026-01-02 00:56:56.968652 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.968656 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.968661 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.968666 | orchestrator |
2026-01-02 00:56:56.968671 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-02 00:56:56.968675 | orchestrator | Friday 02 January 2026 00:53:42 +0000 (0:00:01.441) 0:07:36.714 ********
2026-01-02 00:56:56.968680 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.968685 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.968690 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.968695 | orchestrator |
2026-01-02 00:56:56.968699 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-02 00:56:56.968704 | orchestrator | Friday 02 January 2026 00:53:43 +0000 (0:00:01.177) 0:07:37.892 ********
2026-01-02 00:56:56.968709 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.968713 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.968718 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.968723 | orchestrator |
2026-01-02 00:56:56.968728 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-02 00:56:56.968732 | orchestrator | Friday 02 January 2026 00:53:45 +0000 (0:00:01.823) 0:07:39.716 ********
2026-01-02 00:56:56.968737 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968742 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968747 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.968751 | orchestrator |
2026-01-02 00:56:56.968756 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-02 00:56:56.968761 | orchestrator | Friday 02 January 2026 00:53:45 +0000 (0:00:00.334) 0:07:40.050 ********
2026-01-02 00:56:56.968766 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968770 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968775 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.968780 | orchestrator |
2026-01-02 00:56:56.968785 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-02 00:56:56.968789 | orchestrator | Friday 02 January 2026 00:53:46 +0000 (0:00:00.602) 0:07:40.653 ********
2026-01-02 00:56:56.968794 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-01-02 00:56:56.968799 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-02 00:56:56.968804 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-01-02 00:56:56.968808 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-01-02 00:56:56.968813 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-01-02 00:56:56.968818 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-01-02 00:56:56.968823 | orchestrator |
2026-01-02 00:56:56.968827 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-02 00:56:56.968832 | orchestrator | Friday 02 January 2026 00:53:47 +0000 (0:00:01.034) 0:07:41.687 ********
2026-01-02 00:56:56.968840 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-01-02 00:56:56.968845 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-01-02 00:56:56.968853 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-01-02 00:56:56.968858 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-02 00:56:56.968863 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-01-02 00:56:56.968868 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-02 00:56:56.968872 | orchestrator |
2026-01-02 00:56:56.968877 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-02 00:56:56.968882 | orchestrator | Friday 02 January 2026 00:53:49 +0000 (0:00:02.292) 0:07:43.980 ********
2026-01-02 00:56:56.968887 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-01-02 00:56:56.968892 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-01-02 00:56:56.968896 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-01-02 00:56:56.968901 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-01-02 00:56:56.968906 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-02 00:56:56.968911 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-01-02 00:56:56.968915 | orchestrator |
2026-01-02 00:56:56.968920 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-02 00:56:56.968925 | orchestrator | Friday 02 January 2026 00:53:53 +0000 (0:00:03.896) 0:07:47.876 ********
2026-01-02 00:56:56.968930 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968934 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968939 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:56:56.968944 | orchestrator |
2026-01-02 00:56:56.968949 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-02 00:56:56.968953 | orchestrator | Friday 02 January 2026 00:53:56 +0000 (0:00:02.743) 0:07:50.619 ********
2026-01-02 00:56:56.968958 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.968963 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.968968 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-02 00:56:56.968973 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-02 00:56:56.968977 | orchestrator | 2026-01-02 00:56:56.968995 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-02 00:56:56.969000 | orchestrator | Friday 02 January 2026 00:54:09 +0000 (0:00:12.597) 0:08:03.217 ******** 2026-01-02 00:56:56.969005 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969010 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969015 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969019 | orchestrator | 2026-01-02 00:56:56.969024 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-02 00:56:56.969029 | orchestrator | Friday 02 January 2026 00:54:10 +0000 (0:00:01.091) 0:08:04.309 ******** 2026-01-02 00:56:56.969034 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969041 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969046 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969051 | orchestrator | 2026-01-02 00:56:56.969056 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-02 00:56:56.969060 | orchestrator | Friday 02 January 2026 00:54:10 +0000 (0:00:00.419) 0:08:04.728 ******** 2026-01-02 00:56:56.969065 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:56:56.969070 | orchestrator | 2026-01-02 00:56:56.969075 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-02 00:56:56.969080 | orchestrator | Friday 02 January 2026 00:54:11 +0000 (0:00:00.579) 0:08:05.307 ******** 2026-01-02 00:56:56.969084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:56:56.969089 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-02 00:56:56.969097 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:56:56.969102 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969107 | orchestrator | 2026-01-02 00:56:56.969112 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-02 00:56:56.969116 | orchestrator | Friday 02 January 2026 00:54:11 +0000 (0:00:00.473) 0:08:05.781 ******** 2026-01-02 00:56:56.969121 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969126 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969131 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969135 | orchestrator | 2026-01-02 00:56:56.969140 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-02 00:56:56.969145 | orchestrator | Friday 02 January 2026 00:54:12 +0000 (0:00:00.400) 0:08:06.181 ******** 2026-01-02 00:56:56.969150 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969154 | orchestrator | 2026-01-02 00:56:56.969159 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-02 00:56:56.969231 | orchestrator | Friday 02 January 2026 00:54:12 +0000 (0:00:00.205) 0:08:06.386 ******** 2026-01-02 00:56:56.969237 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969242 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969246 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969251 | orchestrator | 2026-01-02 00:56:56.969256 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-02 00:56:56.969261 | orchestrator | Friday 02 January 2026 00:54:12 +0000 (0:00:00.313) 0:08:06.700 ******** 2026-01-02 00:56:56.969265 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969270 | orchestrator | 2026-01-02 00:56:56.969275 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-02 00:56:56.969280 | orchestrator | Friday 02 January 2026 00:54:12 +0000 (0:00:00.206) 0:08:06.906 ******** 2026-01-02 00:56:56.969284 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969289 | orchestrator | 2026-01-02 00:56:56.969294 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-02 00:56:56.969299 | orchestrator | Friday 02 January 2026 00:54:13 +0000 (0:00:00.216) 0:08:07.123 ******** 2026-01-02 00:56:56.969303 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969308 | orchestrator | 2026-01-02 00:56:56.969313 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-02 00:56:56.969318 | orchestrator | Friday 02 January 2026 00:54:13 +0000 (0:00:00.118) 0:08:07.241 ******** 2026-01-02 00:56:56.969327 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969332 | orchestrator | 2026-01-02 00:56:56.969337 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-02 00:56:56.969342 | orchestrator | Friday 02 January 2026 00:54:13 +0000 (0:00:00.199) 0:08:07.441 ******** 2026-01-02 00:56:56.969347 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969351 | orchestrator | 2026-01-02 00:56:56.969356 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-02 00:56:56.969361 | orchestrator | Friday 02 January 2026 00:54:13 +0000 (0:00:00.210) 0:08:07.652 ******** 2026-01-02 00:56:56.969366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:56:56.969371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:56:56.969375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:56:56.969380 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:56:56.969385 | orchestrator | 2026-01-02 00:56:56.969390 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-02 00:56:56.969395 | orchestrator | Friday 02 January 2026 00:54:14 +0000 (0:00:00.772) 0:08:08.424 ******** 2026-01-02 00:56:56.969408 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969413 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969418 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969422 | orchestrator | 2026-01-02 00:56:56.969427 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-02 00:56:56.969437 | orchestrator | Friday 02 January 2026 00:54:14 +0000 (0:00:00.287) 0:08:08.712 ******** 2026-01-02 00:56:56.969442 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969447 | orchestrator | 2026-01-02 00:56:56.969452 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-02 00:56:56.969457 | orchestrator | Friday 02 January 2026 00:54:14 +0000 (0:00:00.198) 0:08:08.911 ******** 2026-01-02 00:56:56.969461 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969466 | orchestrator | 2026-01-02 00:56:56.969471 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-02 00:56:56.969476 | orchestrator | 2026-01-02 00:56:56.969480 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-02 00:56:56.969485 | orchestrator | Friday 02 January 2026 00:54:15 +0000 (0:00:00.630) 0:08:09.541 ******** 2026-01-02 00:56:56.969490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.969496 | orchestrator | 2026-01-02 00:56:56.969501 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-02 00:56:56.969511 | orchestrator | Friday 02 January 2026 00:54:16 +0000 (0:00:01.055) 0:08:10.597 ******** 2026-01-02 00:56:56.969516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.969521 | orchestrator | 2026-01-02 00:56:56.969525 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-02 00:56:56.969547 | orchestrator | Friday 02 January 2026 00:54:17 +0000 (0:00:01.057) 0:08:11.654 ******** 2026-01-02 00:56:56.969552 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969557 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969561 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969566 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.969571 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.969576 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.969581 | orchestrator | 2026-01-02 00:56:56.969585 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-02 00:56:56.969590 | orchestrator | Friday 02 January 2026 00:54:18 +0000 (0:00:01.046) 0:08:12.700 ******** 2026-01-02 00:56:56.969595 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.969600 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.969604 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.969609 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.969614 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.969619 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.969624 | orchestrator | 2026-01-02 00:56:56.969628 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-02 00:56:56.969633 | orchestrator | Friday 02 
January 2026 00:54:19 +0000 (0:00:00.701) 0:08:13.402 ******** 2026-01-02 00:56:56.969638 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.969643 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.969648 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.969652 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.969657 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.969662 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.969667 | orchestrator | 2026-01-02 00:56:56.969672 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-02 00:56:56.969676 | orchestrator | Friday 02 January 2026 00:54:20 +0000 (0:00:00.985) 0:08:14.387 ******** 2026-01-02 00:56:56.969689 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.969694 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.969699 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.969704 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.969708 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.969713 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.969722 | orchestrator | 2026-01-02 00:56:56.969727 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-02 00:56:56.969731 | orchestrator | Friday 02 January 2026 00:54:21 +0000 (0:00:00.903) 0:08:15.291 ******** 2026-01-02 00:56:56.969736 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969741 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969746 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969751 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.969755 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.969760 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.969765 | orchestrator | 2026-01-02 00:56:56.969770 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-01-02 00:56:56.969778 | orchestrator | Friday 02 January 2026 00:54:22 +0000 (0:00:01.363) 0:08:16.654 ******** 2026-01-02 00:56:56.969809 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969814 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969819 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969824 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.969829 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.969833 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.969838 | orchestrator | 2026-01-02 00:56:56.969843 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-02 00:56:56.969848 | orchestrator | Friday 02 January 2026 00:54:23 +0000 (0:00:00.656) 0:08:17.311 ******** 2026-01-02 00:56:56.969853 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969857 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969862 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969867 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.969871 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.969876 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.969881 | orchestrator | 2026-01-02 00:56:56.969886 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-02 00:56:56.969890 | orchestrator | Friday 02 January 2026 00:54:24 +0000 (0:00:00.838) 0:08:18.149 ******** 2026-01-02 00:56:56.969895 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.969900 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.969905 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.969909 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.969914 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.969919 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.969923 | 
orchestrator | 2026-01-02 00:56:56.969928 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-02 00:56:56.969933 | orchestrator | Friday 02 January 2026 00:54:25 +0000 (0:00:01.004) 0:08:19.153 ******** 2026-01-02 00:56:56.969938 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.969942 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.969947 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.969952 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.969957 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.969961 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.969966 | orchestrator | 2026-01-02 00:56:56.969971 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-02 00:56:56.969976 | orchestrator | Friday 02 January 2026 00:54:26 +0000 (0:00:01.294) 0:08:20.448 ******** 2026-01-02 00:56:56.969980 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.969985 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.969990 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.969995 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.969999 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.970004 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.970009 | orchestrator | 2026-01-02 00:56:56.970038 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-02 00:56:56.970048 | orchestrator | Friday 02 January 2026 00:54:27 +0000 (0:00:00.623) 0:08:21.071 ******** 2026-01-02 00:56:56.970057 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.970062 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.970066 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.970071 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.970076 | orchestrator | ok: [testbed-node-1] 2026-01-02 
00:56:56.970081 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.970086 | orchestrator | 2026-01-02 00:56:56.970090 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-02 00:56:56.970095 | orchestrator | Friday 02 January 2026 00:54:27 +0000 (0:00:00.819) 0:08:21.891 ******** 2026-01-02 00:56:56.970100 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.970105 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.970110 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.970114 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.970119 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.970124 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.970129 | orchestrator | 2026-01-02 00:56:56.970133 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-02 00:56:56.970138 | orchestrator | Friday 02 January 2026 00:54:28 +0000 (0:00:00.593) 0:08:22.484 ******** 2026-01-02 00:56:56.970143 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.970148 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.970153 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.970157 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.970162 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.970167 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.970172 | orchestrator | 2026-01-02 00:56:56.970176 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-02 00:56:56.970181 | orchestrator | Friday 02 January 2026 00:54:29 +0000 (0:00:00.782) 0:08:23.266 ******** 2026-01-02 00:56:56.970186 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.970191 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.970195 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.970200 | orchestrator | skipping: [testbed-node-0] 
2026-01-02 00:56:56.970205 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.970210 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.970215 | orchestrator | 2026-01-02 00:56:56.970219 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-02 00:56:56.970224 | orchestrator | Friday 02 January 2026 00:54:29 +0000 (0:00:00.609) 0:08:23.876 ******** 2026-01-02 00:56:56.970229 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.970234 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.970238 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.970243 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.970248 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.970253 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.970257 | orchestrator | 2026-01-02 00:56:56.970262 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-02 00:56:56.970267 | orchestrator | Friday 02 January 2026 00:54:30 +0000 (0:00:00.894) 0:08:24.770 ******** 2026-01-02 00:56:56.970272 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.970277 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.970282 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.970286 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:56:56.970291 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:56:56.970296 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:56:56.970301 | orchestrator | 2026-01-02 00:56:56.970306 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-02 00:56:56.970314 | orchestrator | Friday 02 January 2026 00:54:31 +0000 (0:00:00.625) 0:08:25.396 ******** 2026-01-02 00:56:56.970319 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.970324 | orchestrator | skipping: [testbed-node-4] 
2026-01-02 00:56:56.970329 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.970334 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.970342 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.970347 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.970352 | orchestrator | 2026-01-02 00:56:56.970357 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-02 00:56:56.970362 | orchestrator | Friday 02 January 2026 00:54:32 +0000 (0:00:00.894) 0:08:26.291 ******** 2026-01-02 00:56:56.970367 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.970371 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.970376 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.970381 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.970386 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.970390 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.970395 | orchestrator | 2026-01-02 00:56:56.970400 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-02 00:56:56.970405 | orchestrator | Friday 02 January 2026 00:54:32 +0000 (0:00:00.591) 0:08:26.882 ******** 2026-01-02 00:56:56.970410 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.970415 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.970419 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.970424 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.970429 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:56:56.970434 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:56:56.970439 | orchestrator | 2026-01-02 00:56:56.970443 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-02 00:56:56.970448 | orchestrator | Friday 02 January 2026 00:54:34 +0000 (0:00:01.235) 0:08:28.118 ******** 2026-01-02 00:56:56.970453 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-02 00:56:56.970458 | orchestrator | 2026-01-02 00:56:56.970463 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-02 00:56:56.970467 | orchestrator | Friday 02 January 2026 00:54:37 +0000 (0:00:03.892) 0:08:32.010 ******** 2026-01-02 00:56:56.970472 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-02 00:56:56.970477 | orchestrator | 2026-01-02 00:56:56.970482 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-02 00:56:56.970487 | orchestrator | Friday 02 January 2026 00:54:39 +0000 (0:00:01.882) 0:08:33.892 ******** 2026-01-02 00:56:56.970492 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.970497 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.970501 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.970506 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:56:56.970511 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.970516 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.970524 | orchestrator | 2026-01-02 00:56:56.970547 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-02 00:56:56.970552 | orchestrator | Friday 02 January 2026 00:54:41 +0000 (0:00:01.797) 0:08:35.690 ******** 2026-01-02 00:56:56.970557 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.970562 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.970566 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.970571 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.970576 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.970580 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.970585 | orchestrator | 2026-01-02 00:56:56.970590 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-02 00:56:56.970595 | orchestrator | Friday 02 January 2026 00:54:42 +0000 (0:00:00.958) 0:08:36.649 ******** 2026-01-02 00:56:56.970600 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:56:56.970607 | orchestrator | 2026-01-02 00:56:56.970611 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-02 00:56:56.970616 | orchestrator | Friday 02 January 2026 00:54:43 +0000 (0:00:01.286) 0:08:37.935 ******** 2026-01-02 00:56:56.970621 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.970663 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.970668 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.970673 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.970678 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.970682 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.970687 | orchestrator | 2026-01-02 00:56:56.970692 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-02 00:56:56.970697 | orchestrator | Friday 02 January 2026 00:54:45 +0000 (0:00:01.692) 0:08:39.627 ******** 2026-01-02 00:56:56.970702 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.970706 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.970711 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.970716 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:56:56.970721 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:56:56.970725 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:56:56.970730 | orchestrator | 2026-01-02 00:56:56.970735 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-02 00:56:56.970739 | orchestrator | Friday 02 January 2026 00:54:49 +0000 (0:00:03.428) 
0:08:43.055 ********
2026-01-02 00:56:56.970745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 00:56:56.970749 | orchestrator |
2026-01-02 00:56:56.970754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-02 00:56:56.970759 | orchestrator | Friday 02 January 2026 00:54:50 +0000 (0:00:01.332) 0:08:44.388 ********
2026-01-02 00:56:56.970764 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.970769 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.970773 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.970778 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.970783 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.970788 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.970792 | orchestrator |
2026-01-02 00:56:56.970797 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-02 00:56:56.970805 | orchestrator | Friday 02 January 2026 00:54:51 +0000 (0:00:00.951) 0:08:45.339 ********
2026-01-02 00:56:56.970810 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.970815 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.970819 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.970824 | orchestrator | changed: [testbed-node-0]
2026-01-02 00:56:56.970829 | orchestrator | changed: [testbed-node-1]
2026-01-02 00:56:56.970834 | orchestrator | changed: [testbed-node-2]
2026-01-02 00:56:56.970838 | orchestrator |
2026-01-02 00:56:56.970843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-02 00:56:56.970848 | orchestrator | Friday 02 January 2026 00:54:53 +0000 (0:00:02.397) 0:08:47.737 ********
2026-01-02 00:56:56.970853 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.970858 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.970862 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.970867 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:56:56.970872 | orchestrator | ok: [testbed-node-1]
2026-01-02 00:56:56.970877 | orchestrator | ok: [testbed-node-2]
2026-01-02 00:56:56.970881 | orchestrator |
2026-01-02 00:56:56.970886 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-02 00:56:56.970891 | orchestrator |
2026-01-02 00:56:56.970896 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:56:56.970901 | orchestrator | Friday 02 January 2026 00:54:54 +0000 (0:00:01.139) 0:08:48.876 ********
2026-01-02 00:56:56.970905 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.970910 | orchestrator |
2026-01-02 00:56:56.970915 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:56:56.970920 | orchestrator | Friday 02 January 2026 00:54:55 +0000 (0:00:00.451) 0:08:49.328 ********
2026-01-02 00:56:56.970930 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.970934 | orchestrator |
2026-01-02 00:56:56.970939 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:56:56.970944 | orchestrator | Friday 02 January 2026 00:54:55 +0000 (0:00:00.270) 0:08:49.990 ********
2026-01-02 00:56:56.970949 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.970954 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.970959 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.970963 | orchestrator |
2026-01-02 00:56:56.970968 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:56:56.970973 | orchestrator | Friday 02 January 2026 00:54:56 +0000 (0:00:00.270) 0:08:50.260 ********
2026-01-02 00:56:56.970978 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.970982 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.970990 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.970995 | orchestrator |
2026-01-02 00:56:56.971000 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:56:56.971004 | orchestrator | Friday 02 January 2026 00:54:56 +0000 (0:00:00.694) 0:08:50.955 ********
2026-01-02 00:56:56.971009 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971014 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971019 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971023 | orchestrator |
2026-01-02 00:56:56.971028 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:56:56.971033 | orchestrator | Friday 02 January 2026 00:54:57 +0000 (0:00:00.893) 0:08:51.848 ********
2026-01-02 00:56:56.971038 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971042 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971047 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971052 | orchestrator |
2026-01-02 00:56:56.971057 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:56:56.971061 | orchestrator | Friday 02 January 2026 00:54:58 +0000 (0:00:00.694) 0:08:52.543 ********
2026-01-02 00:56:56.971066 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971071 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971076 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971080 | orchestrator |
2026-01-02 00:56:56.971085 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:56:56.971090 | orchestrator | Friday 02 January 2026 00:54:58 +0000 (0:00:00.274) 0:08:52.817 ********
2026-01-02 00:56:56.971095 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971100 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971104 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971109 | orchestrator |
2026-01-02 00:56:56.971114 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-02 00:56:56.971119 | orchestrator | Friday 02 January 2026 00:54:59 +0000 (0:00:00.258) 0:08:53.075 ********
2026-01-02 00:56:56.971123 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971128 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971133 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971138 | orchestrator |
2026-01-02 00:56:56.971142 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-02 00:56:56.971147 | orchestrator | Friday 02 January 2026 00:54:59 +0000 (0:00:00.472) 0:08:53.547 ********
2026-01-02 00:56:56.971152 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971157 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971162 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971166 | orchestrator |
2026-01-02 00:56:56.971171 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-02 00:56:56.971176 | orchestrator | Friday 02 January 2026 00:55:00 +0000 (0:00:00.731) 0:08:54.279 ********
2026-01-02 00:56:56.971181 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971185 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971195 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971199 | orchestrator |
2026-01-02 00:56:56.971204 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-02 00:56:56.971209 | orchestrator | Friday 02 January 2026 00:55:01 +0000 (0:00:00.798) 0:08:55.077 ********
2026-01-02 00:56:56.971214 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971219 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971224 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971228 | orchestrator |
2026-01-02 00:56:56.971233 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-02 00:56:56.971240 | orchestrator | Friday 02 January 2026 00:55:01 +0000 (0:00:00.348) 0:08:55.425 ********
2026-01-02 00:56:56.971246 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971250 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971255 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971260 | orchestrator |
2026-01-02 00:56:56.971265 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-02 00:56:56.971270 | orchestrator | Friday 02 January 2026 00:55:01 +0000 (0:00:00.568) 0:08:55.994 ********
2026-01-02 00:56:56.971274 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971279 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971284 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971289 | orchestrator |
2026-01-02 00:56:56.971294 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-02 00:56:56.971299 | orchestrator | Friday 02 January 2026 00:55:02 +0000 (0:00:00.353) 0:08:56.347 ********
2026-01-02 00:56:56.971303 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971308 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971313 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971318 | orchestrator |
2026-01-02 00:56:56.971322 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-02 00:56:56.971327 | orchestrator | Friday 02 January 2026 00:55:02 +0000 (0:00:00.359) 0:08:56.707 ********
2026-01-02 00:56:56.971332 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971337 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971342 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971346 | orchestrator |
2026-01-02 00:56:56.971351 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-02 00:56:56.971356 | orchestrator | Friday 02 January 2026 00:55:02 +0000 (0:00:00.345) 0:08:57.053 ********
2026-01-02 00:56:56.971361 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971365 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971370 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971375 | orchestrator |
2026-01-02 00:56:56.971380 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-02 00:56:56.971384 | orchestrator | Friday 02 January 2026 00:55:03 +0000 (0:00:00.519) 0:08:57.572 ********
2026-01-02 00:56:56.971389 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971394 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971399 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971403 | orchestrator |
2026-01-02 00:56:56.971408 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-02 00:56:56.971413 | orchestrator | Friday 02 January 2026 00:55:03 +0000 (0:00:00.345) 0:08:57.918 ********
2026-01-02 00:56:56.971418 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971423 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971427 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971432 | orchestrator |
2026-01-02 00:56:56.971440 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-02 00:56:56.971445 | orchestrator | Friday 02 January 2026 00:55:04 +0000 (0:00:00.362) 0:08:58.280 ********
2026-01-02 00:56:56.971450 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971455 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971460 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971464 | orchestrator |
2026-01-02 00:56:56.971477 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-02 00:56:56.971482 | orchestrator | Friday 02 January 2026 00:55:04 +0000 (0:00:00.319) 0:08:58.600 ********
2026-01-02 00:56:56.971487 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.971492 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.971496 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.971501 | orchestrator |
2026-01-02 00:56:56.971506 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-02 00:56:56.971511 | orchestrator | Friday 02 January 2026 00:55:05 +0000 (0:00:00.821) 0:08:59.421 ********
2026-01-02 00:56:56.971516 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971521 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971525 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-02 00:56:56.971661 | orchestrator |
2026-01-02 00:56:56.971667 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-02 00:56:56.971672 | orchestrator | Friday 02 January 2026 00:55:05 +0000 (0:00:00.541) 0:08:59.962 ********
2026-01-02 00:56:56.971678 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:56:56.971682 | orchestrator |
2026-01-02 00:56:56.971687 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-02 00:56:56.971692 | orchestrator | Friday 02 January 2026 00:55:08 +0000 (0:00:02.335) 0:09:02.297 ********
2026-01-02 00:56:56.971698 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-02 00:56:56.971705 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971710 | orchestrator |
2026-01-02 00:56:56.971714 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-02 00:56:56.971719 | orchestrator | Friday 02 January 2026 00:55:08 +0000 (0:00:00.180) 0:09:02.478 ********
2026-01-02 00:56:56.971726 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:56:56.971737 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:56:56.971742 | orchestrator |
2026-01-02 00:56:56.971753 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-02 00:56:56.971758 | orchestrator | Friday 02 January 2026 00:55:16 +0000 (0:00:08.274) 0:09:10.752 ********
2026-01-02 00:56:56.971763 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-02 00:56:56.971768 | orchestrator |
2026-01-02 00:56:56.971773 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-02 00:56:56.971777 | orchestrator | Friday 02 January 2026 00:55:20 +0000 (0:00:04.025) 0:09:14.777 ********
2026-01-02 00:56:56.971783 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.971787 | orchestrator |
2026-01-02 00:56:56.971792 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-02 00:56:56.971797 | orchestrator | Friday 02 January 2026 00:55:21 +0000 (0:00:00.604) 0:09:15.382 ********
2026-01-02 00:56:56.971802 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-02 00:56:56.971806 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-02 00:56:56.971811 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-02 00:56:56.971816 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-02 00:56:56.971827 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-02 00:56:56.971832 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-02 00:56:56.971837 | orchestrator |
2026-01-02 00:56:56.971842 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-02 00:56:56.971846 | orchestrator | Friday 02 January 2026 00:55:22 +0000 (0:00:01.104) 0:09:16.487 ********
2026-01-02 00:56:56.971851 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.971856 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.971861 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:56:56.971866 | orchestrator |
2026-01-02 00:56:56.971871 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-02 00:56:56.971875 | orchestrator | Friday 02 January 2026 00:55:24 +0000 (0:00:02.130) 0:09:18.618 ********
2026-01-02 00:56:56.971880 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.971885 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.971890 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.971894 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-02 00:56:56.971902 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-02 00:56:56.971907 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.971911 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-02 00:56:56.971916 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-02 00:56:56.971920 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.971925 | orchestrator |
2026-01-02 00:56:56.971929 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-02 00:56:56.971934 | orchestrator | Friday 02 January 2026 00:55:25 +0000 (0:00:01.320) 0:09:19.938 ********
2026-01-02 00:56:56.971938 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.971943 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.971947 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.971952 | orchestrator |
2026-01-02 00:56:56.971956 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-02 00:56:56.971961 | orchestrator | Friday 02 January 2026 00:55:28 +0000 (0:00:02.925) 0:09:22.864 ********
2026-01-02 00:56:56.971965 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.971970 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.971974 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.971979 | orchestrator |
2026-01-02 00:56:56.971983 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-02 00:56:56.971988 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.339) 0:09:23.204 ********
2026-01-02 00:56:56.971993 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.971997 | orchestrator |
2026-01-02 00:56:56.972002 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-02 00:56:56.972006 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.832) 0:09:24.036 ********
2026-01-02 00:56:56.972011 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.972015 | orchestrator |
2026-01-02 00:56:56.972020 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-02 00:56:56.972024 | orchestrator | Friday 02 January 2026 00:55:30 +0000 (0:00:00.500) 0:09:24.536 ********
2026-01-02 00:56:56.972029 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.972033 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.972038 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.972042 | orchestrator |
2026-01-02 00:56:56.972047 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-02 00:56:56.972052 | orchestrator | Friday 02 January 2026 00:55:31 +0000 (0:00:01.291) 0:09:25.827 ********
2026-01-02 00:56:56.972060 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.972064 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.972069 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.972073 | orchestrator |
2026-01-02 00:56:56.972078 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-02 00:56:56.972082 | orchestrator | Friday 02 January 2026 00:55:33 +0000 (0:00:01.954) 0:09:27.419 ********
2026-01-02 00:56:56.972087 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.972091 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.972096 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.972100 | orchestrator |
2026-01-02 00:56:56.972105 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-02 00:56:56.972112 | orchestrator | Friday 02 January 2026 00:55:35 +0000 (0:00:01.954) 0:09:29.374 ********
2026-01-02 00:56:56.972117 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.972122 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.972126 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.972130 | orchestrator |
2026-01-02 00:56:56.972135 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-02 00:56:56.972140 | orchestrator | Friday 02 January 2026 00:55:37 +0000 (0:00:02.109) 0:09:31.483 ********
2026-01-02 00:56:56.972144 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972149 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972153 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972158 | orchestrator |
2026-01-02 00:56:56.972162 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-02 00:56:56.972167 | orchestrator | Friday 02 January 2026 00:55:39 +0000 (0:00:01.744) 0:09:33.228 ********
2026-01-02 00:56:56.972172 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.972176 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.972180 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.972185 | orchestrator |
2026-01-02 00:56:56.972189 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-02 00:56:56.972194 | orchestrator | Friday 02 January 2026 00:55:39 +0000 (0:00:00.698) 0:09:33.926 ********
2026-01-02 00:56:56.972198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.972203 | orchestrator |
2026-01-02 00:56:56.972208 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-02 00:56:56.972212 | orchestrator | Friday 02 January 2026 00:55:40 +0000 (0:00:00.743) 0:09:34.670 ********
2026-01-02 00:56:56.972217 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972221 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972226 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972230 | orchestrator |
2026-01-02 00:56:56.972235 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-02 00:56:56.972239 | orchestrator | Friday 02 January 2026 00:55:40 +0000 (0:00:00.314) 0:09:34.985 ********
2026-01-02 00:56:56.972244 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.972248 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.972253 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.972257 | orchestrator |
2026-01-02 00:56:56.972262 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-02 00:56:56.972267 | orchestrator | Friday 02 January 2026 00:55:42 +0000 (0:00:01.219) 0:09:36.205 ********
2026-01-02 00:56:56.972271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-02 00:56:56.972276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-02 00:56:56.972285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-02 00:56:56.972289 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972297 | orchestrator |
2026-01-02 00:56:56.972301 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-02 00:56:56.972306 | orchestrator | Friday 02 January 2026 00:55:42 +0000 (0:00:00.827) 0:09:37.032 ********
2026-01-02 00:56:56.972314 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972319 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972323 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972328 | orchestrator |
2026-01-02 00:56:56.972332 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-02 00:56:56.972337 | orchestrator |
2026-01-02 00:56:56.972342 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-02 00:56:56.972346 | orchestrator | Friday 02 January 2026 00:55:43 +0000 (0:00:00.804) 0:09:37.837 ********
2026-01-02 00:56:56.972351 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.972355 | orchestrator |
2026-01-02 00:56:56.972360 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-02 00:56:56.972365 | orchestrator | Friday 02 January 2026 00:55:44 +0000 (0:00:00.493) 0:09:38.331 ********
2026-01-02 00:56:56.972369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.972374 | orchestrator |
2026-01-02 00:56:56.972378 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-02 00:56:56.972383 | orchestrator | Friday 02 January 2026 00:55:45 +0000 (0:00:00.775) 0:09:39.106 ********
2026-01-02 00:56:56.972388 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972392 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972397 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972401 | orchestrator |
2026-01-02 00:56:56.972406 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-02 00:56:56.972410 | orchestrator | Friday 02 January 2026 00:55:45 +0000 (0:00:00.328) 0:09:39.434 ********
2026-01-02 00:56:56.972415 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972419 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972424 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972428 | orchestrator |
2026-01-02 00:56:56.972433 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-02 00:56:56.972437 | orchestrator | Friday 02 January 2026 00:55:46 +0000 (0:00:00.746) 0:09:40.181 ********
2026-01-02 00:56:56.972442 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972447 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972451 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972456 | orchestrator |
2026-01-02 00:56:56.972460 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-02 00:56:56.972465 | orchestrator | Friday 02 January 2026 00:55:46 +0000 (0:00:00.728) 0:09:40.910 ********
2026-01-02 00:56:56.972469 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972474 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972478 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972483 | orchestrator |
2026-01-02 00:56:56.972487 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-02 00:56:56.972492 | orchestrator | Friday 02 January 2026 00:55:47 +0000 (0:00:01.002) 0:09:41.912 ********
2026-01-02 00:56:56.972497 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972504 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972509 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972513 | orchestrator |
2026-01-02 00:56:56.972518 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-02 00:56:56.972522 | orchestrator | Friday 02 January 2026 00:55:48 +0000 (0:00:00.321) 0:09:42.233 ********
2026-01-02 00:56:56.972547 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972556 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972561 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972566 | orchestrator |
2026-01-02 00:56:56.972570 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-02 00:56:56.972575 | orchestrator | Friday 02 January 2026 00:55:48 +0000 (0:00:00.323) 0:09:42.556 ********
2026-01-02 00:56:56.972579 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972588 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972592 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972597 | orchestrator |
2026-01-02 00:56:56.972601 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-02 00:56:56.972606 | orchestrator | Friday 02 January 2026 00:55:48 +0000 (0:00:00.307) 0:09:42.864 ********
2026-01-02 00:56:56.972610 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972615 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972620 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972624 | orchestrator |
2026-01-02 00:56:56.972629 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-02 00:56:56.972633 | orchestrator | Friday 02 January 2026 00:55:49 +0000 (0:00:01.078) 0:09:43.943 ********
2026-01-02 00:56:56.972638 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972642 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972647 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972651 | orchestrator |
2026-01-02 00:56:56.972656 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-02 00:56:56.972660 | orchestrator | Friday 02 January 2026 00:55:50 +0000 (0:00:00.782) 0:09:44.725 ********
2026-01-02 00:56:56.972665 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972669 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972674 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972679 | orchestrator |
2026-01-02 00:56:56.972683 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-02 00:56:56.972688 | orchestrator | Friday 02 January 2026 00:55:50 +0000 (0:00:00.316) 0:09:45.042 ********
2026-01-02 00:56:56.972692 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972697 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972701 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972706 | orchestrator |
2026-01-02 00:56:56.972710 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-02 00:56:56.972718 | orchestrator | Friday 02 January 2026 00:55:51 +0000 (0:00:00.305) 0:09:45.347 ********
2026-01-02 00:56:56.972723 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972727 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972732 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972736 | orchestrator |
2026-01-02 00:56:56.972741 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-02 00:56:56.972746 | orchestrator | Friday 02 January 2026 00:55:51 +0000 (0:00:00.565) 0:09:45.913 ********
2026-01-02 00:56:56.972750 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972755 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972759 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972764 | orchestrator |
2026-01-02 00:56:56.972768 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-02 00:56:56.972773 | orchestrator | Friday 02 January 2026 00:55:52 +0000 (0:00:00.343) 0:09:46.257 ********
2026-01-02 00:56:56.972777 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972782 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972786 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972791 | orchestrator |
2026-01-02 00:56:56.972795 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-02 00:56:56.972800 | orchestrator | Friday 02 January 2026 00:55:52 +0000 (0:00:00.358) 0:09:46.616 ********
2026-01-02 00:56:56.972804 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972809 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972813 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972818 | orchestrator |
2026-01-02 00:56:56.972823 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-02 00:56:56.972827 | orchestrator | Friday 02 January 2026 00:55:52 +0000 (0:00:00.316) 0:09:46.933 ********
2026-01-02 00:56:56.972832 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972836 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972841 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972849 | orchestrator |
2026-01-02 00:56:56.972853 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-02 00:56:56.972858 | orchestrator | Friday 02 January 2026 00:55:53 +0000 (0:00:00.555) 0:09:47.488 ********
2026-01-02 00:56:56.972863 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.972867 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.972872 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.972876 | orchestrator |
2026-01-02 00:56:56.972881 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-02 00:56:56.972886 | orchestrator | Friday 02 January 2026 00:55:53 +0000 (0:00:00.303) 0:09:47.792 ********
2026-01-02 00:56:56.972890 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972895 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972899 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972904 | orchestrator |
2026-01-02 00:56:56.972908 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-02 00:56:56.972913 | orchestrator | Friday 02 January 2026 00:55:54 +0000 (0:00:00.348) 0:09:48.141 ********
2026-01-02 00:56:56.972917 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:56:56.972922 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:56:56.972927 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:56:56.972931 | orchestrator |
2026-01-02 00:56:56.972936 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-01-02 00:56:56.972940 | orchestrator | Friday 02 January 2026 00:55:54 +0000 (0:00:00.767) 0:09:48.908 ********
2026-01-02 00:56:56.972948 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.972953 | orchestrator |
2026-01-02 00:56:56.972957 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-02 00:56:56.972962 | orchestrator | Friday 02 January 2026 00:55:55 +0000 (0:00:00.574) 0:09:49.483 ********
2026-01-02 00:56:56.972967 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.972971 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.972976 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:56:56.972980 | orchestrator |
2026-01-02 00:56:56.972985 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-01-02 00:56:56.972989 | orchestrator | Friday 02 January 2026 00:55:57 +0000 (0:00:02.227) 0:09:51.711 ********
2026-01-02 00:56:56.972994 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.972999 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-02 00:56:56.973003 | orchestrator | changed: [testbed-node-3]
2026-01-02 00:56:56.973008 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-02 00:56:56.973012 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-02 00:56:56.973017 | orchestrator | changed: [testbed-node-4]
2026-01-02 00:56:56.973021 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-02 00:56:56.973026 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-02 00:56:56.973030 | orchestrator | changed: [testbed-node-5]
2026-01-02 00:56:56.973035 | orchestrator |
2026-01-02 00:56:56.973039 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-01-02 00:56:56.973044 | orchestrator | Friday 02 January 2026 00:55:58 +0000 (0:00:01.302) 0:09:53.014 ********
2026-01-02 00:56:56.973049 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:56:56.973053 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:56:56.973058 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:56:56.973062 | orchestrator |
2026-01-02 00:56:56.973067 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-01-02 00:56:56.973071 | orchestrator | Friday 02 January 2026 00:55:59 +0000 (0:00:00.652) 0:09:53.666 ********
2026-01-02 00:56:56.973076 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:56:56.973084 | orchestrator |
2026-01-02 00:56:56.973089 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-01-02 00:56:56.973093 | orchestrator | Friday 02 January 2026 00:56:00 +0000 (0:00:00.540) 0:09:54.207 ********
2026-01-02 00:56:56.973101 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.973106 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.973110 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-02 00:56:56.973115 | orchestrator |
2026-01-02 00:56:56.973120 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-01-02 00:56:56.973124 | orchestrator | Friday 02 January 2026 00:56:00 +0000 (0:00:00.817) 0:09:55.024 ********
2026-01-02 00:56:56.973129 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.973133 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-02 00:56:56.973138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.973142 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.973147 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-02 00:56:56.973152 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-02 00:56:56.973157 | orchestrator |
2026-01-02 00:56:56.973161 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-02 00:56:56.973166 | orchestrator | Friday 02 January 2026 00:56:05 +0000 (0:00:04.804) 0:09:59.829 ********
2026-01-02 00:56:56.973170 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:56:56.973175 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:56:56.973179 | orchestrator |
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:56:56.973184 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-02 00:56:56.973188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-02 00:56:56.973193 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-02 00:56:56.973198 | orchestrator | 2026-01-02 00:56:56.973202 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-02 00:56:56.973207 | orchestrator | Friday 02 January 2026 00:56:08 +0000 (0:00:02.321) 0:10:02.150 ******** 2026-01-02 00:56:56.973211 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-02 00:56:56.973216 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.973220 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-02 00:56:56.973225 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.973230 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-02 00:56:56.973234 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.973239 | orchestrator | 2026-01-02 00:56:56.973246 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-02 00:56:56.973251 | orchestrator | Friday 02 January 2026 00:56:09 +0000 (0:00:01.301) 0:10:03.452 ******** 2026-01-02 00:56:56.973255 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-02 00:56:56.973260 | orchestrator | 2026-01-02 00:56:56.973265 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-02 00:56:56.973269 | orchestrator | Friday 02 January 2026 00:56:09 +0000 (0:00:00.221) 0:10:03.673 ******** 2026-01-02 00:56:56.973278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-02 00:56:56.973283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973301 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.973306 | orchestrator | 2026-01-02 00:56:56.973310 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-02 00:56:56.973315 | orchestrator | Friday 02 January 2026 00:56:10 +0000 (0:00:01.007) 0:10:04.681 ******** 2026-01-02 00:56:56.973319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-02 00:56:56.973345 | orchestrator | skipping: [testbed-node-3] 2026-01-02 
00:56:56.973350 | orchestrator | 2026-01-02 00:56:56.973354 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-02 00:56:56.973359 | orchestrator | Friday 02 January 2026 00:56:11 +0000 (0:00:01.013) 0:10:05.695 ******** 2026-01-02 00:56:56.973364 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:56:56.973368 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:56:56.973373 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:56:56.973377 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:56:56.973382 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-02 00:56:56.973386 | orchestrator | 2026-01-02 00:56:56.973391 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-02 00:56:56.973396 | orchestrator | Friday 02 January 2026 00:56:42 +0000 (0:00:31.089) 0:10:36.784 ******** 2026-01-02 00:56:56.973400 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.973405 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.973409 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.973414 | orchestrator | 2026-01-02 00:56:56.973419 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-02 00:56:56.973423 | orchestrator | 
Friday 02 January 2026 00:56:43 +0000 (0:00:00.335) 0:10:37.119 ******** 2026-01-02 00:56:56.973428 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.973436 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.973440 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.973445 | orchestrator | 2026-01-02 00:56:56.973449 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-02 00:56:56.973454 | orchestrator | Friday 02 January 2026 00:56:43 +0000 (0:00:00.294) 0:10:37.414 ******** 2026-01-02 00:56:56.973458 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:56:56.973463 | orchestrator | 2026-01-02 00:56:56.973468 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-02 00:56:56.973472 | orchestrator | Friday 02 January 2026 00:56:44 +0000 (0:00:00.732) 0:10:38.146 ******** 2026-01-02 00:56:56.973479 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:56:56.973484 | orchestrator | 2026-01-02 00:56:56.973489 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-02 00:56:56.973493 | orchestrator | Friday 02 January 2026 00:56:44 +0000 (0:00:00.508) 0:10:38.655 ******** 2026-01-02 00:56:56.973498 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.973502 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.973507 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.973511 | orchestrator | 2026-01-02 00:56:56.973516 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-02 00:56:56.973520 | orchestrator | Friday 02 January 2026 00:56:45 +0000 (0:00:01.272) 0:10:39.927 ******** 2026-01-02 00:56:56.973525 | orchestrator | changed: 
[testbed-node-4] 2026-01-02 00:56:56.973542 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.973547 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.973552 | orchestrator | 2026-01-02 00:56:56.973556 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-02 00:56:56.973561 | orchestrator | Friday 02 January 2026 00:56:47 +0000 (0:00:01.483) 0:10:41.411 ******** 2026-01-02 00:56:56.973565 | orchestrator | changed: [testbed-node-4] 2026-01-02 00:56:56.973570 | orchestrator | changed: [testbed-node-3] 2026-01-02 00:56:56.973574 | orchestrator | changed: [testbed-node-5] 2026-01-02 00:56:56.973579 | orchestrator | 2026-01-02 00:56:56.973583 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-02 00:56:56.973588 | orchestrator | Friday 02 January 2026 00:56:49 +0000 (0:00:01.945) 0:10:43.356 ******** 2026-01-02 00:56:56.973593 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-02 00:56:56.973597 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-02 00:56:56.973602 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-02 00:56:56.973606 | orchestrator | 2026-01-02 00:56:56.973611 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-02 00:56:56.973616 | orchestrator | Friday 02 January 2026 00:56:51 +0000 (0:00:02.664) 0:10:46.021 ******** 2026-01-02 00:56:56.973620 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.973625 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.973629 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.973634 | orchestrator 
| 2026-01-02 00:56:56.973638 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-02 00:56:56.973646 | orchestrator | Friday 02 January 2026 00:56:52 +0000 (0:00:00.331) 0:10:46.352 ******** 2026-01-02 00:56:56.973650 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:56:56.973655 | orchestrator | 2026-01-02 00:56:56.973660 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-02 00:56:56.973664 | orchestrator | Friday 02 January 2026 00:56:52 +0000 (0:00:00.497) 0:10:46.850 ******** 2026-01-02 00:56:56.973672 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.973677 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.973681 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.973686 | orchestrator | 2026-01-02 00:56:56.973690 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-02 00:56:56.973695 | orchestrator | Friday 02 January 2026 00:56:53 +0000 (0:00:00.549) 0:10:47.400 ******** 2026-01-02 00:56:56.973699 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:56:56.973704 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:56:56.973708 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:56:56.973713 | orchestrator | 2026-01-02 00:56:56.973717 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-02 00:56:56.973722 | orchestrator | Friday 02 January 2026 00:56:53 +0000 (0:00:00.329) 0:10:47.730 ******** 2026-01-02 00:56:56.973726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:56:56.973731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:56:56.973736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:56:56.973740 | orchestrator 
| skipping: [testbed-node-3] 2026-01-02 00:56:56.973745 | orchestrator | 2026-01-02 00:56:56.973749 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-02 00:56:56.973754 | orchestrator | Friday 02 January 2026 00:56:54 +0000 (0:00:00.603) 0:10:48.334 ******** 2026-01-02 00:56:56.973758 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:56:56.973763 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:56:56.973768 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:56:56.973772 | orchestrator | 2026-01-02 00:56:56.973777 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:56:56.973781 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-02 00:56:56.973786 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-02 00:56:56.973791 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-02 00:56:56.973795 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-02 00:56:56.973800 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-02 00:56:56.973808 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-02 00:56:56.973812 | orchestrator | 2026-01-02 00:56:56.973817 | orchestrator | 2026-01-02 00:56:56.973822 | orchestrator | 2026-01-02 00:56:56.973826 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:56:56.973831 | orchestrator | Friday 02 January 2026 00:56:54 +0000 (0:00:00.260) 0:10:48.594 ******** 2026-01-02 00:56:56.973835 | orchestrator | =============================================================================== 
2026-01-02 00:56:56.973840 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.34s 2026-01-02 00:56:56.973844 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.24s 2026-01-02 00:56:56.973849 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.13s 2026-01-02 00:56:56.973854 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.09s 2026-01-02 00:56:56.973858 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.03s 2026-01-02 00:56:56.973863 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.52s 2026-01-02 00:56:56.973867 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.60s 2026-01-02 00:56:56.973875 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.32s 2026-01-02 00:56:56.973880 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.27s 2026-01-02 00:56:56.973884 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.27s 2026-01-02 00:56:56.973889 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.07s 2026-01-02 00:56:56.973893 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.81s 2026-01-02 00:56:56.973898 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.07s 2026-01-02 00:56:56.973903 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.80s 2026-01-02 00:56:56.973907 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.03s 2026-01-02 00:56:56.973912 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.90s 2026-01-02 
00:56:56.973916 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.89s 2026-01-02 00:56:56.973924 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.52s 2026-01-02 00:56:56.973928 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.43s 2026-01-02 00:56:56.973933 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.43s 2026-01-02 00:56:56.973937 | orchestrator | 2026-01-02 00:56:56 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:56:56.973942 | orchestrator | 2026-01-02 00:56:56 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:56:56.973947 | orchestrator | 2026-01-02 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:57:00.003123 | orchestrator | 2026-01-02 00:57:00 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state STARTED 2026-01-02 00:57:00.003240 | orchestrator | 2026-01-02 00:57:00 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:57:00.003267 | orchestrator | 2026-01-02 00:57:00 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:57:00.003287 | orchestrator | 2026-01-02 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:00.956068 | orchestrator | 2026-01-02 00:58:00 | INFO  | Task cb2d8017-1882-4a99-a5a8-bd31076669d9 is in state SUCCESS 2026-01-02 00:58:00.959151 | orchestrator | 2026-01-02 00:58:00.959206 | orchestrator | 2026-01-02 00:58:00.959219 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:58:00.959303 | orchestrator | 2026-01-02 00:58:00.959316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:58:00.959328 | orchestrator | Friday 02 January 2026 00:54:55 +0000 (0:00:00.230) 0:00:00.230 ******** 2026-01-02 00:58:00.959339 | orchestrator | ok:
[testbed-node-0] 2026-01-02 00:58:00.959609 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:00.959630 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:00.959641 | orchestrator | 2026-01-02 00:58:00.959652 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:58:00.959664 | orchestrator | Friday 02 January 2026 00:54:55 +0000 (0:00:00.268) 0:00:00.499 ******** 2026-01-02 00:58:00.959675 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-02 00:58:00.959686 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-02 00:58:00.959697 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-02 00:58:00.959708 | orchestrator | 2026-01-02 00:58:00.959719 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-02 00:58:00.959730 | orchestrator | 2026-01-02 00:58:00.959741 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:58:00.959752 | orchestrator | Friday 02 January 2026 00:54:55 +0000 (0:00:00.390) 0:00:00.889 ******** 2026-01-02 00:58:00.959763 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:00.959774 | orchestrator | 2026-01-02 00:58:00.959785 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-02 00:58:00.959796 | orchestrator | Friday 02 January 2026 00:54:56 +0000 (0:00:00.463) 0:00:01.352 ******** 2026-01-02 00:58:00.959807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:58:00.959818 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-02 00:58:00.959829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 
2026-01-02 00:58:00.959840 | orchestrator | 2026-01-02 00:58:00.959851 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-02 00:58:00.959862 | orchestrator | Friday 02 January 2026 00:54:56 +0000 (0:00:00.642) 0:00:01.995 ******** 2026-01-02 00:58:00.959878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.959935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.959963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.959979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.959994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960034 | orchestrator | 2026-01-02 00:58:00.960046 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:58:00.960057 | orchestrator | Friday 02 January 2026 00:54:58 +0000 (0:00:01.666) 0:00:03.661 ******** 2026-01-02 00:58:00.960069 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:00.960080 | orchestrator | 2026-01-02 00:58:00.960093 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-02 00:58:00.960116 | orchestrator | Friday 02 January 2026 00:54:59 +0000 (0:00:00.493) 0:00:04.155 ******** 2026-01-02 00:58:00.960128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.960141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.960165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.960179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960237 | orchestrator | 2026-01-02 00:58:00.960251 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-02 00:58:00.960265 | orchestrator | Friday 02 January 2026 00:55:02 +0000 (0:00:03.105) 0:00:07.260 ******** 2026-01-02 
00:58:00.960285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.960300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.960322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.960337 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:00.960352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': 
['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.960372 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:00.960392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.960415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.960429 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:00.960443 | orchestrator | 2026-01-02 00:58:00.960492 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-02 00:58:00.960508 | orchestrator | Friday 02 January 2026 00:55:04 +0000 (0:00:01.984) 0:00:09.245 ******** 2026-01-02 00:58:00.960522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.960543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.960563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.960578 | orchestrator | skipping: [testbed-node-0] 
2026-01-02 00:58:00.960598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.960610 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:00.960622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.960647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.960659 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:00.960671 | orchestrator | 2026-01-02 00:58:00.960682 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-02 00:58:00.960693 | orchestrator | Friday 02 January 2026 00:55:05 +0000 (0:00:01.061) 0:00:10.306 ******** 2026-01-02 00:58:00.960705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.960725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.960738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.960868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.960933 | orchestrator | 2026-01-02 00:58:00.960944 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-02 00:58:00.960956 | orchestrator | Friday 02 January 2026 00:55:08 +0000 (0:00:02.843) 0:00:13.149 ******** 2026-01-02 00:58:00.960968 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:00.960979 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:00.960990 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:00.961001 | orchestrator | 2026-01-02 00:58:00.961012 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-02 00:58:00.961023 | orchestrator | Friday 02 January 2026 00:55:10 +0000 (0:00:02.259) 0:00:15.409 ******** 2026-01-02 00:58:00.961034 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:00.961045 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:00.961057 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:00.961067 | orchestrator | 2026-01-02 00:58:00.961078 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-02 00:58:00.961089 | orchestrator | Friday 02 January 2026 00:55:12 +0000 (0:00:02.073) 0:00:17.483 ******** 2026-01-02 00:58:00.961101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.961118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.961130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 00:58:00.961156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.961170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.961187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-02 00:58:00.961199 | orchestrator | 2026-01-02 00:58:00.961210 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-02 00:58:00.961222 | orchestrator | Friday 02 January 2026 
00:55:14 +0000 (0:00:02.066) 0:00:19.549 ******** 2026-01-02 00:58:00.961233 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:58:00.961244 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:58:00.961255 | orchestrator | } 2026-01-02 00:58:00.961266 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:58:00.961283 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:58:00.961295 | orchestrator | } 2026-01-02 00:58:00.961306 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:58:00.961316 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:58:00.961327 | orchestrator | } 2026-01-02 00:58:00.961338 | orchestrator | 2026-01-02 00:58:00.961349 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:58:00.961366 | orchestrator | Friday 02 January 2026 00:55:14 +0000 (0:00:00.338) 0:00:19.888 ******** 2026-01-02 00:58:00.961378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.961390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.961402 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:00.961419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}})  2026-01-02 00:58:00.961437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.961456 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:00.961547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 00:58:00.961568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-02 00:58:00.961588 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:00.961601 | orchestrator | 2026-01-02 00:58:00.961613 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:58:00.961627 | orchestrator | Friday 02 January 2026 00:55:16 +0000 (0:00:01.535) 0:00:21.423 ******** 2026-01-02 00:58:00.961639 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:00.961652 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:00.961666 | orchestrator | skipping: [testbed-node-2] 2026-01-02 
00:58:00.961678 | orchestrator | 2026-01-02 00:58:00.961698 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-02 00:58:00.961709 | orchestrator | Friday 02 January 2026 00:55:16 +0000 (0:00:00.347) 0:00:21.771 ******** 2026-01-02 00:58:00.961720 | orchestrator | 2026-01-02 00:58:00.961731 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-02 00:58:00.961742 | orchestrator | Friday 02 January 2026 00:55:16 +0000 (0:00:00.065) 0:00:21.837 ******** 2026-01-02 00:58:00.961752 | orchestrator | 2026-01-02 00:58:00.961763 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-02 00:58:00.961782 | orchestrator | Friday 02 January 2026 00:55:16 +0000 (0:00:00.068) 0:00:21.905 ******** 2026-01-02 00:58:00.961793 | orchestrator | 2026-01-02 00:58:00.961804 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-02 00:58:00.961814 | orchestrator | Friday 02 January 2026 00:55:16 +0000 (0:00:00.070) 0:00:21.976 ******** 2026-01-02 00:58:00.961825 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:00.961836 | orchestrator | 2026-01-02 00:58:00.961847 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-02 00:58:00.961857 | orchestrator | Friday 02 January 2026 00:55:17 +0000 (0:00:00.311) 0:00:22.287 ******** 2026-01-02 00:58:00.961868 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:00.961879 | orchestrator | 2026-01-02 00:58:00.961890 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-02 00:58:00.961901 | orchestrator | Friday 02 January 2026 00:55:17 +0000 (0:00:00.219) 0:00:22.506 ******** 2026-01-02 00:58:00.961912 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:00.961922 | orchestrator | changed: [testbed-node-2] 2026-01-02 
00:58:00.961933 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:00.961944 | orchestrator | 2026-01-02 00:58:00.961955 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-02 00:58:00.961966 | orchestrator | Friday 02 January 2026 00:56:24 +0000 (0:01:06.820) 0:01:29.327 ******** 2026-01-02 00:58:00.961977 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:00.961987 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:00.961998 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:00.962009 | orchestrator | 2026-01-02 00:58:00.962079 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-02 00:58:00.962092 | orchestrator | Friday 02 January 2026 00:57:45 +0000 (0:01:21.301) 0:02:50.629 ******** 2026-01-02 00:58:00.962112 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:00.962123 | orchestrator | 2026-01-02 00:58:00.962134 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-02 00:58:00.962146 | orchestrator | Friday 02 January 2026 00:57:46 +0000 (0:00:00.520) 0:02:51.149 ******** 2026-01-02 00:58:00.962157 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:00.962168 | orchestrator | 2026-01-02 00:58:00.962178 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-02 00:58:00.962189 | orchestrator | Friday 02 January 2026 00:57:48 +0000 (0:00:02.756) 0:02:53.905 ******** 2026-01-02 00:58:00.962200 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:00.962211 | orchestrator | 2026-01-02 00:58:00.962222 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-02 00:58:00.962233 | orchestrator | Friday 02 January 2026 00:57:51 +0000 (0:00:02.726) 0:02:56.632 ******** 2026-01-02 
00:58:00.962244 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:00.962255 | orchestrator | 2026-01-02 00:58:00.962266 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-02 00:58:00.962277 | orchestrator | Friday 02 January 2026 00:57:55 +0000 (0:00:03.636) 0:03:00.268 ******** 2026-01-02 00:58:00.962288 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:00.962298 | orchestrator | 2026-01-02 00:58:00.962309 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:58:00.962322 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-02 00:58:00.962333 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-02 00:58:00.962344 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-02 00:58:00.962356 | orchestrator | 2026-01-02 00:58:00.962367 | orchestrator | 2026-01-02 00:58:00.962377 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:58:00.962396 | orchestrator | Friday 02 January 2026 00:57:57 +0000 (0:00:02.840) 0:03:03.109 ******** 2026-01-02 00:58:00.962407 | orchestrator | =============================================================================== 2026-01-02 00:58:00.962417 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.30s 2026-01-02 00:58:00.962428 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.82s 2026-01-02 00:58:00.962439 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.64s 2026-01-02 00:58:00.962450 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.11s 2026-01-02 00:58:00.962523 | orchestrator | opensearch : 
Copying over config.json files for services ---------------- 2.84s 2026-01-02 00:58:00.962536 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.84s 2026-01-02 00:58:00.962547 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.76s 2026-01-02 00:58:00.962558 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.73s 2026-01-02 00:58:00.962568 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.26s 2026-01-02 00:58:00.962579 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.07s 2026-01-02 00:58:00.962595 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.07s 2026-01-02 00:58:00.962606 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.98s 2026-01-02 00:58:00.962617 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.67s 2026-01-02 00:58:00.962628 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.54s 2026-01-02 00:58:00.962639 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2026-01-02 00:58:00.962649 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2026-01-02 00:58:00.962660 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-01-02 00:58:00.962671 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-01-02 00:58:00.962682 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2026-01-02 00:58:00.962693 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-01-02 00:58:00.962704 | orchestrator | 2026-01-02 
00:58:00 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:58:00.962714 | orchestrator | 2026-01-02 00:58:00 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:58:00.962724 | orchestrator | 2026-01-02 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:04.017569 | orchestrator | 2026-01-02 00:58:04 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:58:04.018236 | orchestrator | 2026-01-02 00:58:04 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:58:04.018265 | orchestrator | 2026-01-02 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:07.067355 | orchestrator | 2026-01-02 00:58:07 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:58:07.068309 | orchestrator | 2026-01-02 00:58:07 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state STARTED 2026-01-02 00:58:07.068344 | orchestrator | 2026-01-02 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:10.115920 | orchestrator | 2026-01-02 00:58:10 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:58:10.118207 | orchestrator | 2026-01-02 00:58:10 | INFO  | Task 38e35a80-1ec0-4718-935c-1f1c6e56c7a5 is in state SUCCESS 2026-01-02 00:58:10.119799 | orchestrator | 2026-01-02 00:58:10.119858 | orchestrator | 2026-01-02 00:58:10.119875 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-02 00:58:10.119892 | orchestrator | 2026-01-02 00:58:10.119907 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-02 00:58:10.120028 | orchestrator | Friday 02 January 2026 00:54:54 +0000 (0:00:00.065) 0:00:00.065 ******** 2026-01-02 00:58:10.120051 | orchestrator | ok: [localhost] => { 2026-01-02 00:58:10.120066 | orchestrator |  "msg": "The task 
'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-02 00:58:10.120079 | orchestrator | } 2026-01-02 00:58:10.120087 | orchestrator | 2026-01-02 00:58:10.120095 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-02 00:58:10.120104 | orchestrator | Friday 02 January 2026 00:54:54 +0000 (0:00:00.043) 0:00:00.109 ******** 2026-01-02 00:58:10.120112 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-02 00:58:10.120122 | orchestrator | ...ignoring 2026-01-02 00:58:10.120130 | orchestrator | 2026-01-02 00:58:10.120138 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-02 00:58:10.120146 | orchestrator | Friday 02 January 2026 00:54:57 +0000 (0:00:02.709) 0:00:02.818 ******** 2026-01-02 00:58:10.120154 | orchestrator | skipping: [localhost] 2026-01-02 00:58:10.120166 | orchestrator | 2026-01-02 00:58:10.120180 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-02 00:58:10.120193 | orchestrator | Friday 02 January 2026 00:54:57 +0000 (0:00:00.049) 0:00:02.868 ******** 2026-01-02 00:58:10.120206 | orchestrator | ok: [localhost] 2026-01-02 00:58:10.120219 | orchestrator | 2026-01-02 00:58:10.120661 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 00:58:10.120692 | orchestrator | 2026-01-02 00:58:10.120705 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 00:58:10.120720 | orchestrator | Friday 02 January 2026 00:54:57 +0000 (0:00:00.132) 0:00:03.000 ******** 2026-01-02 00:58:10.120728 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.120737 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.120745 | orchestrator | ok: 
[testbed-node-2] 2026-01-02 00:58:10.120753 | orchestrator | 2026-01-02 00:58:10.120760 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:58:10.120768 | orchestrator | Friday 02 January 2026 00:54:57 +0000 (0:00:00.286) 0:00:03.286 ******** 2026-01-02 00:58:10.120776 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-02 00:58:10.120785 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-02 00:58:10.120792 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-02 00:58:10.120800 | orchestrator | 2026-01-02 00:58:10.120808 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-02 00:58:10.120816 | orchestrator | 2026-01-02 00:58:10.120824 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-02 00:58:10.120846 | orchestrator | Friday 02 January 2026 00:54:58 +0000 (0:00:00.484) 0:00:03.771 ******** 2026-01-02 00:58:10.120854 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-02 00:58:10.120862 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-02 00:58:10.120870 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-02 00:58:10.120878 | orchestrator | 2026-01-02 00:58:10.120886 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:58:10.120893 | orchestrator | Friday 02 January 2026 00:54:58 +0000 (0:00:00.338) 0:00:04.109 ******** 2026-01-02 00:58:10.120901 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:10.120911 | orchestrator | 2026-01-02 00:58:10.120919 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-02 00:58:10.120927 | orchestrator | Friday 02 January 2026 00:54:59 +0000 
(0:00:00.523) 0:00:04.633 ******** 2026-01-02 00:58:10.121031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121084 | orchestrator | 2026-01-02 00:58:10.121118 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-02 00:58:10.121128 | orchestrator | Friday 02 January 2026 00:55:02 +0000 (0:00:03.232) 0:00:07.865 ******** 2026-01-02 00:58:10.121137 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.121145 | orchestrator | 
skipping: [testbed-node-1] 2026-01-02 00:58:10.121153 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.121161 | orchestrator | 2026-01-02 00:58:10.121169 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-02 00:58:10.121177 | orchestrator | Friday 02 January 2026 00:55:03 +0000 (0:00:01.091) 0:00:08.957 ******** 2026-01-02 00:58:10.121185 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.121193 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.121201 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.121209 | orchestrator | 2026-01-02 00:58:10.121217 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-02 00:58:10.121225 | orchestrator | Friday 02 January 2026 00:55:05 +0000 (0:00:01.780) 0:00:10.738 ******** 2026-01-02 00:58:10.121237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121281 | orchestrator | 2026-01-02 00:58:10.121299 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-02 00:58:10.121309 | orchestrator | Friday 02 January 2026 00:55:08 +0000 (0:00:03.528) 0:00:14.266 ******** 2026-01-02 00:58:10.121318 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.121327 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.121336 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.121346 | orchestrator | 2026-01-02 00:58:10.121354 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-02 00:58:10.121364 | orchestrator | Friday 02 January 2026 00:55:10 +0000 (0:00:01.208) 0:00:15.475 ******** 2026-01-02 00:58:10.121373 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:10.121382 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:10.121391 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.121400 | orchestrator | 2026-01-02 00:58:10.121409 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:58:10.121419 | orchestrator | Friday 02 January 2026 00:55:14 +0000 (0:00:04.007) 0:00:19.483 ******** 2026-01-02 00:58:10.121428 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:10.121438 | orchestrator | 2026-01-02 00:58:10.121469 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over extra CA certificates] ******** 2026-01-02 00:58:10.121480 | orchestrator | Friday 02 January 2026 00:55:14 +0000 (0:00:00.565) 0:00:20.049 ******** 2026-01-02 00:58:10.121499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-02 00:58:10.121515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121531 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.121540 | orchestrator | skipping: 
[testbed-node-1] 2026-01-02 00:58:10.121556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121567 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.121577 | orchestrator | 2026-01-02 
00:58:10.121586 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-02 00:58:10.121595 | orchestrator | Friday 02 January 2026 00:55:17 +0000 (0:00:02.629) 0:00:22.678 ******** 2026-01-02 00:58:10.121609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121626 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.121640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-02 00:58:10.121649 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.121658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121679 | orchestrator | skipping: 
[testbed-node-2] 2026-01-02 00:58:10.121687 | orchestrator | 2026-01-02 00:58:10.121699 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-02 00:58:10.121707 | orchestrator | Friday 02 January 2026 00:55:20 +0000 (0:00:02.772) 0:00:25.450 ******** 2026-01-02 00:58:10.121716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121725 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.121740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121754 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.121767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-01-02 00:58:10.121776 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.121784 | orchestrator | 2026-01-02 00:58:10.121792 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-02 00:58:10.121800 | orchestrator | Friday 02 January 2026 00:55:22 +0000 (0:00:02.635) 0:00:28.086 ******** 2026-01-02 00:58:10.121815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-02 00:58:10.121864 | orchestrator | 2026-01-02 00:58:10.121872 | orchestrator | TASK 
[service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-02 00:58:10.121880 | orchestrator | Friday 02 January 2026 00:55:25 +0000 (0:00:02.616) 0:00:30.703 ******** 2026-01-02 00:58:10.121888 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:58:10.121896 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:58:10.121904 | orchestrator | } 2026-01-02 00:58:10.121912 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:58:10.121920 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:58:10.121928 | orchestrator | } 2026-01-02 00:58:10.121936 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:58:10.121944 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:58:10.121952 | orchestrator | } 2026-01-02 00:58:10.121960 | orchestrator | 2026-01-02 00:58:10.121968 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:58:10.121976 | orchestrator | Friday 02 January 2026 00:55:25 +0000 (0:00:00.426) 0:00:31.129 ******** 2026-01-02 00:58:10.121988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.121997 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.122069 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.122092 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122100 | orchestrator | 2026-01-02 00:58:10.122108 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-01-02 00:58:10.122116 | orchestrator | Friday 02 January 2026 00:55:27 +0000 (0:00:02.192) 0:00:33.322 ******** 2026-01-02 00:58:10.122124 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122132 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122140 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122148 | orchestrator | 2026-01-02 00:58:10.122156 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-01-02 00:58:10.122164 | orchestrator | Friday 02 January 2026 00:55:28 +0000 (0:00:00.367) 0:00:33.689 ******** 2026-01-02 00:58:10.122172 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122180 | orchestrator | 2026-01-02 00:58:10.122188 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-01-02 00:58:10.122196 | orchestrator | Friday 02 January 2026 00:55:28 +0000 (0:00:00.120) 0:00:33.810 ******** 2026-01-02 
00:58:10.122209 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122217 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122224 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122232 | orchestrator | 2026-01-02 00:58:10.122240 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-01-02 00:58:10.122248 | orchestrator | Friday 02 January 2026 00:55:28 +0000 (0:00:00.392) 0:00:34.203 ******** 2026-01-02 00:58:10.122262 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122270 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122278 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122286 | orchestrator | 2026-01-02 00:58:10.122294 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-01-02 00:58:10.122302 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.277) 0:00:34.480 ******** 2026-01-02 00:58:10.122310 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122318 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122325 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122333 | orchestrator | 2026-01-02 00:58:10.122341 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-01-02 00:58:10.122349 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.281) 0:00:34.762 ******** 2026-01-02 00:58:10.122357 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122365 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122373 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122381 | orchestrator | 2026-01-02 00:58:10.122389 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-01-02 00:58:10.122396 | orchestrator | Friday 02 January 2026 00:55:29 +0000 (0:00:00.295) 0:00:35.058 ******** 2026-01-02 
00:58:10.122404 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122412 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122420 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122428 | orchestrator | 2026-01-02 00:58:10.122436 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-01-02 00:58:10.122444 | orchestrator | Friday 02 January 2026 00:55:30 +0000 (0:00:00.448) 0:00:35.506 ******** 2026-01-02 00:58:10.122468 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122476 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122484 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122492 | orchestrator | 2026-01-02 00:58:10.122500 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-01-02 00:58:10.122508 | orchestrator | Friday 02 January 2026 00:55:30 +0000 (0:00:00.313) 0:00:35.819 ******** 2026-01-02 00:58:10.122516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-02 00:58:10.122524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-02 00:58:10.122531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-02 00:58:10.122539 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-02 00:58:10.122555 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-02 00:58:10.122563 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-02 00:58:10.122571 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122579 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-02 00:58:10.122586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-02 00:58:10.122594 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-2)  2026-01-02 00:58:10.122602 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122610 | orchestrator | 2026-01-02 00:58:10.122622 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-01-02 00:58:10.122630 | orchestrator | Friday 02 January 2026 00:55:30 +0000 (0:00:00.349) 0:00:36.168 ******** 2026-01-02 00:58:10.122638 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122651 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122659 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122667 | orchestrator | 2026-01-02 00:58:10.122675 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-01-02 00:58:10.122683 | orchestrator | Friday 02 January 2026 00:55:31 +0000 (0:00:00.314) 0:00:36.483 ******** 2026-01-02 00:58:10.122691 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122699 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122706 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122714 | orchestrator | 2026-01-02 00:58:10.122722 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-01-02 00:58:10.122730 | orchestrator | Friday 02 January 2026 00:55:31 +0000 (0:00:00.579) 0:00:37.062 ******** 2026-01-02 00:58:10.122738 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122746 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122754 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122762 | orchestrator | 2026-01-02 00:58:10.122770 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-01-02 00:58:10.122778 | orchestrator | Friday 02 January 2026 00:55:32 +0000 (0:00:00.364) 0:00:37.426 ******** 2026-01-02 00:58:10.122785 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122793 | orchestrator | 
skipping: [testbed-node-1] 2026-01-02 00:58:10.122801 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122809 | orchestrator | 2026-01-02 00:58:10.122817 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-01-02 00:58:10.122825 | orchestrator | Friday 02 January 2026 00:55:32 +0000 (0:00:00.421) 0:00:37.848 ******** 2026-01-02 00:58:10.122833 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122841 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122848 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122856 | orchestrator | 2026-01-02 00:58:10.122864 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-01-02 00:58:10.122872 | orchestrator | Friday 02 January 2026 00:55:32 +0000 (0:00:00.337) 0:00:38.186 ******** 2026-01-02 00:58:10.122880 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122888 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122896 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122904 | orchestrator | 2026-01-02 00:58:10.122912 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-01-02 00:58:10.122920 | orchestrator | Friday 02 January 2026 00:55:33 +0000 (0:00:00.634) 0:00:38.820 ******** 2026-01-02 00:58:10.122928 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.122935 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.122943 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.122951 | orchestrator | 2026-01-02 00:58:10.122959 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-01-02 00:58:10.122974 | orchestrator | Friday 02 January 2026 00:55:33 +0000 (0:00:00.336) 0:00:39.157 ******** 2026-01-02 00:58:10.122987 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123000 | orchestrator | 
skipping: [testbed-node-1] 2026-01-02 00:58:10.123013 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123026 | orchestrator | 2026-01-02 00:58:10.123039 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-01-02 00:58:10.123051 | orchestrator | Friday 02 January 2026 00:55:34 +0000 (0:00:00.322) 0:00:39.479 ******** 2026-01-02 00:58:10.123066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.123082 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.123100 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.123145 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123157 | orchestrator | 2026-01-02 00:58:10.123165 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-01-02 00:58:10.123173 | orchestrator | Friday 02 January 2026 00:55:36 +0000 (0:00:02.382) 0:00:41.862 ******** 2026-01-02 00:58:10.123181 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123189 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123201 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123209 | orchestrator | 2026-01-02 00:58:10.123217 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-02 00:58:10.123225 | orchestrator | Friday 02 January 2026 00:55:36 +0000 (0:00:00.327) 0:00:42.189 ******** 2026-01-02 00:58:10.123233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.123247 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.123272 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-02 00:58:10.123294 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123301 | orchestrator | 2026-01-02 00:58:10.123309 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-02 00:58:10.123317 | orchestrator | Friday 02 January 2026 00:55:39 +0000 (0:00:02.211) 0:00:44.400 ******** 2026-01-02 00:58:10.123325 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123333 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123341 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123349 | orchestrator | 2026-01-02 00:58:10.123357 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-02 00:58:10.123369 | orchestrator | Friday 02 January 2026 00:55:39 +0000 (0:00:00.322) 0:00:44.723 ******** 2026-01-02 00:58:10.123382 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123390 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123398 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123406 | orchestrator | 2026-01-02 00:58:10.123414 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-02 00:58:10.123422 | orchestrator | Friday 02 January 2026 00:55:39 +0000 (0:00:00.307) 0:00:45.031 
******** 2026-01-02 00:58:10.123430 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123438 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123445 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123521 | orchestrator | 2026-01-02 00:58:10.123536 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-02 00:58:10.123549 | orchestrator | Friday 02 January 2026 00:55:39 +0000 (0:00:00.313) 0:00:45.344 ******** 2026-01-02 00:58:10.123558 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123566 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123574 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123582 | orchestrator | 2026-01-02 00:58:10.123590 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-02 00:58:10.123598 | orchestrator | Friday 02 January 2026 00:55:40 +0000 (0:00:00.758) 0:00:46.103 ******** 2026-01-02 00:58:10.123606 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123613 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123621 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123629 | orchestrator | 2026-01-02 00:58:10.123637 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-02 00:58:10.123644 | orchestrator | Friday 02 January 2026 00:55:41 +0000 (0:00:00.297) 0:00:46.401 ******** 2026-01-02 00:58:10.123652 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.123660 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:10.123668 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:10.123676 | orchestrator | 2026-01-02 00:58:10.123683 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-02 00:58:10.123691 | orchestrator | Friday 02 January 2026 00:55:41 +0000 (0:00:00.922) 0:00:47.323 
******** 2026-01-02 00:58:10.123699 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.123707 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.123715 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.123723 | orchestrator | 2026-01-02 00:58:10.123731 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-02 00:58:10.123739 | orchestrator | Friday 02 January 2026 00:55:42 +0000 (0:00:00.522) 0:00:47.845 ******** 2026-01-02 00:58:10.123747 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.123755 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.123763 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.123770 | orchestrator | 2026-01-02 00:58:10.123778 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-02 00:58:10.123786 | orchestrator | Friday 02 January 2026 00:55:42 +0000 (0:00:00.320) 0:00:48.166 ******** 2026-01-02 00:58:10.123800 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-02 00:58:10.123809 | orchestrator | ...ignoring 2026-01-02 00:58:10.123817 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-02 00:58:10.123825 | orchestrator | ...ignoring 2026-01-02 00:58:10.123833 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-02 00:58:10.123841 | orchestrator | ...ignoring 2026-01-02 00:58:10.123849 | orchestrator | 2026-01-02 00:58:10.123857 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-02 00:58:10.123865 | orchestrator | Friday 02 January 2026 00:55:53 +0000 (0:00:10.738) 0:00:58.904 ******** 2026-01-02 00:58:10.123879 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.123887 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.123895 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.123903 | orchestrator | 2026-01-02 00:58:10.123911 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-02 00:58:10.123919 | orchestrator | Friday 02 January 2026 00:55:53 +0000 (0:00:00.328) 0:00:59.232 ******** 2026-01-02 00:58:10.123927 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123935 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123943 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123951 | orchestrator | 2026-01-02 00:58:10.123959 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-02 00:58:10.123967 | orchestrator | Friday 02 January 2026 00:55:54 +0000 (0:00:00.540) 0:00:59.773 ******** 2026-01-02 00:58:10.123975 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.123983 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.123990 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.123998 | orchestrator | 2026-01-02 00:58:10.124006 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-02 00:58:10.124014 | orchestrator | Friday 02 January 2026 00:55:54 +0000 (0:00:00.346) 0:01:00.119 ******** 2026-01-02 00:58:10.124022 | orchestrator | skipping: 
[testbed-node-0] 2026-01-02 00:58:10.124030 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.124038 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.124046 | orchestrator | 2026-01-02 00:58:10.124054 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-02 00:58:10.124062 | orchestrator | Friday 02 January 2026 00:55:55 +0000 (0:00:00.312) 0:01:00.432 ******** 2026-01-02 00:58:10.124070 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.124078 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.124086 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.124094 | orchestrator | 2026-01-02 00:58:10.124102 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-02 00:58:10.124110 | orchestrator | Friday 02 January 2026 00:55:55 +0000 (0:00:00.309) 0:01:00.742 ******** 2026-01-02 00:58:10.124118 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.124132 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.124140 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.124148 | orchestrator | 2026-01-02 00:58:10.124156 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:58:10.124164 | orchestrator | Friday 02 January 2026 00:55:55 +0000 (0:00:00.608) 0:01:01.350 ******** 2026-01-02 00:58:10.124172 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.124180 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.124188 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-02 00:58:10.124196 | orchestrator | 2026-01-02 00:58:10.124204 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-02 00:58:10.124212 | orchestrator | Friday 02 January 2026 00:55:56 +0000 (0:00:00.382) 0:01:01.733 ******** 2026-01-02 
00:58:10.124220 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.124228 | orchestrator | 2026-01-02 00:58:10.124236 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-02 00:58:10.124244 | orchestrator | Friday 02 January 2026 00:56:06 +0000 (0:00:10.608) 0:01:12.342 ******** 2026-01-02 00:58:10.124252 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.124259 | orchestrator | 2026-01-02 00:58:10.124267 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-02 00:58:10.124275 | orchestrator | Friday 02 January 2026 00:56:07 +0000 (0:00:00.140) 0:01:12.482 ******** 2026-01-02 00:58:10.124283 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.124291 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.124299 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.124307 | orchestrator | 2026-01-02 00:58:10.124320 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-02 00:58:10.124328 | orchestrator | Friday 02 January 2026 00:56:08 +0000 (0:00:00.940) 0:01:13.423 ******** 2026-01-02 00:58:10.124336 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.124344 | orchestrator | 2026-01-02 00:58:10.124352 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-02 00:58:10.124360 | orchestrator | Friday 02 January 2026 00:56:14 +0000 (0:00:06.539) 0:01:19.963 ******** 2026-01-02 00:58:10.124368 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.124376 | orchestrator | 2026-01-02 00:58:10.124384 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-02 00:58:10.124391 | orchestrator | Friday 02 January 2026 00:56:16 +0000 (0:00:01.560) 0:01:21.523 ******** 2026-01-02 00:58:10.124399 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.124407 | 
orchestrator | 2026-01-02 00:58:10.124415 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-02 00:58:10.124423 | orchestrator | Friday 02 January 2026 00:56:18 +0000 (0:00:02.289) 0:01:23.813 ******** 2026-01-02 00:58:10.124431 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.124439 | orchestrator | 2026-01-02 00:58:10.124465 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-02 00:58:10.124475 | orchestrator | Friday 02 January 2026 00:56:18 +0000 (0:00:00.120) 0:01:23.934 ******** 2026-01-02 00:58:10.124483 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.124495 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.124504 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.124511 | orchestrator | 2026-01-02 00:58:10.124520 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-02 00:58:10.124527 | orchestrator | Friday 02 January 2026 00:56:18 +0000 (0:00:00.313) 0:01:24.247 ******** 2026-01-02 00:58:10.124535 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.124543 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-02 00:58:10.124551 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:10.124559 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:10.124567 | orchestrator | 2026-01-02 00:58:10.124575 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-02 00:58:10.124583 | orchestrator | skipping: no hosts matched 2026-01-02 00:58:10.124590 | orchestrator | 2026-01-02 00:58:10.124598 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-02 00:58:10.124606 | orchestrator | 2026-01-02 00:58:10.124614 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-02 00:58:10.124622 | orchestrator | Friday 02 January 2026 00:56:19 +0000 (0:00:00.537) 0:01:24.785 ******** 2026-01-02 00:58:10.124629 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:58:10.124637 | orchestrator | 2026-01-02 00:58:10.124645 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-02 00:58:10.124653 | orchestrator | Friday 02 January 2026 00:56:35 +0000 (0:00:16.558) 0:01:41.343 ******** 2026-01-02 00:58:10.124661 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.124669 | orchestrator | 2026-01-02 00:58:10.124676 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-02 00:58:10.124684 | orchestrator | Friday 02 January 2026 00:56:51 +0000 (0:00:15.585) 0:01:56.929 ******** 2026-01-02 00:58:10.124692 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.124700 | orchestrator | 2026-01-02 00:58:10.124708 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-02 00:58:10.124716 | orchestrator | 2026-01-02 00:58:10.124723 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-02 00:58:10.124731 | orchestrator | Friday 02 January 2026 00:56:53 +0000 (0:00:01.941) 0:01:58.870 ******** 2026-01-02 00:58:10.124739 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:58:10.124747 | orchestrator | 2026-01-02 00:58:10.124755 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-02 00:58:10.124768 | orchestrator | Friday 02 January 2026 00:57:10 +0000 (0:00:17.502) 0:02:16.372 ******** 2026-01-02 00:58:10.124776 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.124784 | orchestrator | 2026-01-02 00:58:10.124792 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-02 00:58:10.124799 
| orchestrator | Friday 02 January 2026 00:57:27 +0000 (0:00:16.572) 0:02:32.944 ******** 2026-01-02 00:58:10.124807 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.124815 | orchestrator | 2026-01-02 00:58:10.124823 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-02 00:58:10.124831 | orchestrator | 2026-01-02 00:58:10.124843 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-02 00:58:10.124851 | orchestrator | Friday 02 January 2026 00:57:29 +0000 (0:00:02.206) 0:02:35.150 ******** 2026-01-02 00:58:10.124860 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.124867 | orchestrator | 2026-01-02 00:58:10.124875 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-02 00:58:10.124883 | orchestrator | Friday 02 January 2026 00:57:41 +0000 (0:00:11.434) 0:02:46.585 ******** 2026-01-02 00:58:10.124891 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.124899 | orchestrator | 2026-01-02 00:58:10.124907 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-02 00:58:10.124915 | orchestrator | Friday 02 January 2026 00:57:45 +0000 (0:00:04.583) 0:02:51.169 ******** 2026-01-02 00:58:10.124923 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.124931 | orchestrator | 2026-01-02 00:58:10.124939 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-02 00:58:10.124947 | orchestrator | 2026-01-02 00:58:10.124955 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-02 00:58:10.124963 | orchestrator | Friday 02 January 2026 00:57:48 +0000 (0:00:02.864) 0:02:54.033 ******** 2026-01-02 00:58:10.124971 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:58:10.124979 | orchestrator | 
2026-01-02 00:58:10.124987 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-02 00:58:10.124994 | orchestrator | Friday 02 January 2026 00:57:49 +0000 (0:00:00.516) 0:02:54.550 ******** 2026-01-02 00:58:10.125002 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125010 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125018 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.125026 | orchestrator | 2026-01-02 00:58:10.125034 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-02 00:58:10.125042 | orchestrator | Friday 02 January 2026 00:57:51 +0000 (0:00:02.560) 0:02:57.110 ******** 2026-01-02 00:58:10.125050 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125058 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125066 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.125073 | orchestrator | 2026-01-02 00:58:10.125081 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-02 00:58:10.125089 | orchestrator | Friday 02 January 2026 00:57:54 +0000 (0:00:02.537) 0:02:59.648 ******** 2026-01-02 00:58:10.125097 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125105 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125113 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.125121 | orchestrator | 2026-01-02 00:58:10.125129 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-02 00:58:10.125137 | orchestrator | Friday 02 January 2026 00:57:56 +0000 (0:00:02.395) 0:03:02.044 ******** 2026-01-02 00:58:10.125145 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125153 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125161 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:58:10.125169 | orchestrator | 
2026-01-02 00:58:10.125177 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-02 00:58:10.125189 | orchestrator | Friday 02 January 2026 00:57:58 +0000 (0:00:02.301) 0:03:04.345 ******** 2026-01-02 00:58:10.125203 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.125211 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.125219 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.125227 | orchestrator | 2026-01-02 00:58:10.125235 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-02 00:58:10.125243 | orchestrator | Friday 02 January 2026 00:58:03 +0000 (0:00:04.462) 0:03:08.808 ******** 2026-01-02 00:58:10.125251 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.125259 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125266 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125274 | orchestrator | 2026-01-02 00:58:10.125282 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-02 00:58:10.125290 | orchestrator | Friday 02 January 2026 00:58:05 +0000 (0:00:02.374) 0:03:11.183 ******** 2026-01-02 00:58:10.125298 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.125306 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125314 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125322 | orchestrator | 2026-01-02 00:58:10.125330 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-02 00:58:10.125338 | orchestrator | Friday 02 January 2026 00:58:06 +0000 (0:00:00.518) 0:03:11.702 ******** 2026-01-02 00:58:10.125346 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:58:10.125354 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:58:10.125361 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:58:10.125369 | orchestrator | 2026-01-02 00:58:10.125377 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-02 00:58:10.125385 | orchestrator | Friday 02 January 2026 00:58:09 +0000 (0:00:02.876) 0:03:14.578 ******** 2026-01-02 00:58:10.125393 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:58:10.125401 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:58:10.125409 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:58:10.125417 | orchestrator | 2026-01-02 00:58:10.125425 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 00:58:10.125433 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-02 00:58:10.125441 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-01-02 00:58:10.125470 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-02 00:58:10.125479 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-02 00:58:10.125487 | orchestrator | 2026-01-02 00:58:10.125495 | orchestrator | 2026-01-02 00:58:10.125507 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 00:58:10.125515 | orchestrator | Friday 02 January 2026 00:58:09 +0000 (0:00:00.460) 0:03:15.038 ******** 2026-01-02 00:58:10.125523 | orchestrator | =============================================================================== 2026-01-02 00:58:10.125531 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.06s 2026-01-02 00:58:10.125539 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.16s 2026-01-02 00:58:10.125547 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.44s 2026-01-02 00:58:10.125555 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.74s 2026-01-02 00:58:10.125562 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.61s 2026-01-02 00:58:10.125570 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.54s 2026-01-02 00:58:10.125578 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s 2026-01-02 00:58:10.125592 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.46s 2026-01-02 00:58:10.125600 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.15s 2026-01-02 00:58:10.125608 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.01s 2026-01-02 00:58:10.125616 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.53s 2026-01-02 00:58:10.125623 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.23s 2026-01-02 00:58:10.125631 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.88s 2026-01-02 00:58:10.125639 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.86s 2026-01-02 00:58:10.125647 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.77s 2026-01-02 00:58:10.125655 | orchestrator | Check MariaDB service --------------------------------------------------- 2.71s 2026-01-02 00:58:10.125663 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.64s 2026-01-02 00:58:10.125671 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.63s 2026-01-02 00:58:10.125678 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.62s 2026-01-02 00:58:10.125686 | orchestrator | mariadb : 
Creating shard root mysql user -------------------------------- 2.56s 2026-01-02 00:58:10.125694 | orchestrator | 2026-01-02 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:58:13.185580 | orchestrator | 2026-01-02 00:58:13 | INFO  | Task e606cee1-ca9a-47fe-bd0e-99f3002516b2 is in state STARTED 2026-01-02 00:58:13.188579 | orchestrator | 2026-01-02 00:58:13 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state STARTED 2026-01-02 00:58:13.190940 | orchestrator | 2026-01-02 00:58:13 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED 2026-01-02 00:58:13.190992 | orchestrator | 2026-01-02 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 00:59:08.063922 | orchestrator | 2026-01-02 00:59:08 | INFO  | Task e606cee1-ca9a-47fe-bd0e-99f3002516b2 is in state STARTED 2026-01-02 00:59:08.065760 | orchestrator | 2026-01-02 00:59:08 | INFO  | Task 88215152-d7bf-465a-8a27-620f62a6487c is in state SUCCESS 2026-01-02 00:59:08.068021 | orchestrator | 2026-01-02 00:59:08.068078 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-02 00:59:08.068459 | orchestrator | 2.16.14
2026-01-02 00:59:08.068490 | orchestrator |
2026-01-02 00:59:08.068509 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-01-02 00:59:08.068529 | orchestrator |
2026-01-02 00:59:08.068548 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-02 00:59:08.068567 | orchestrator | Friday 02 January 2026 00:56:59 +0000 (0:00:00.497) 0:00:00.497 ********
2026-01-02 00:59:08.068586 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 00:59:08.068607 | orchestrator |
2026-01-02 00:59:08.068624 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-02 00:59:08.068635 | orchestrator | Friday 02 January 2026 00:56:59 +0000 (0:00:00.486) 0:00:00.984 ********
2026-01-02 00:59:08.068647 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.068658 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.068669 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.068718 | orchestrator |
2026-01-02 00:59:08.068738 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-02 00:59:08.068756 | orchestrator | Friday 02 January 2026 00:57:00 +0000 (0:00:00.656) 0:00:01.641 ********
2026-01-02 00:59:08.068775 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.068793 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.068805 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.068815 | orchestrator |
2026-01-02 00:59:08.068826 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-02 00:59:08.068840 | orchestrator | Friday 02 January 2026 00:57:00 +0000 (0:00:00.225) 0:00:01.866 ********
2026-01-02 00:59:08.068859 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.068877 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.068913 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.068932 | orchestrator |
2026-01-02 00:59:08.068951 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-02 00:59:08.068971 | orchestrator | Friday 02 January 2026 00:57:01 +0000 (0:00:00.733) 0:00:02.599 ********
2026-01-02 00:59:08.068990 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.069009 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.069028 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.069047 | orchestrator |
2026-01-02 00:59:08.069063 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-02 00:59:08.069078 | orchestrator | Friday 02 January 2026 00:57:01 +0000 (0:00:00.269) 0:00:02.869 ********
2026-01-02 00:59:08.069089 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.069099 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.069110 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.069121 | orchestrator |
2026-01-02 00:59:08.069132 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-02 00:59:08.069143 | orchestrator | Friday 02 January 2026 00:57:02 +0000 (0:00:00.260) 0:00:03.130 ********
2026-01-02 00:59:08.069161 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.069178 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.069196 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.069216 | orchestrator |
2026-01-02 00:59:08.069234 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-02 00:59:08.069253 | orchestrator | Friday 02 January 2026 00:57:02 +0000 (0:00:00.264) 0:00:03.394 ********
2026-01-02 00:59:08.069272 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.069292 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.069309 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.069322 | orchestrator |
2026-01-02 00:59:08.069341 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-02 00:59:08.069359 | orchestrator | Friday 02 January 2026 00:57:02 +0000 (0:00:00.368) 0:00:03.763 ********
2026-01-02 00:59:08.069401 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.069421 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.069439 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.069458 | orchestrator |
2026-01-02 00:59:08.069495 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-02 00:59:08.069517 | orchestrator | Friday 02 January 2026 00:57:02 +0000 (0:00:00.257) 0:00:04.021 ********
2026-01-02 00:59:08.069528 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:59:08.069545 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:59:08.069565 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:59:08.069584 | orchestrator |
2026-01-02 00:59:08.069604 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-02 00:59:08.069624 | orchestrator | Friday 02 January 2026 00:57:03 +0000 (0:00:00.591) 0:00:04.612 ********
2026-01-02 00:59:08.069644 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.069663 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.069681 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.069698 | orchestrator |
2026-01-02 00:59:08.069709 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-02 00:59:08.069721 | orchestrator | Friday 02 January 2026 00:57:03 +0000 (0:00:00.416) 0:00:05.029 ********
2026-01-02 00:59:08.069732 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-02 00:59:08.069744 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-02 00:59:08.069762 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-02 00:59:08.069782 | orchestrator |
2026-01-02 00:59:08.069795 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-02 00:59:08.069805 | orchestrator | Friday 02 January 2026 00:57:05 +0000 (0:00:01.981) 0:00:07.010 ********
2026-01-02 00:59:08.069817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-02 00:59:08.069836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-02 00:59:08.069872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-02 00:59:08.069890 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.069910 | orchestrator |
2026-01-02 00:59:08.070065 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-02 00:59:08.070087 | orchestrator | Friday 02 January 2026 00:57:06 +0000 (0:00:00.496) 0:00:07.507 ********
2026-01-02 00:59:08.070107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070134 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070172 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.070190 | orchestrator |
2026-01-02 00:59:08.070209 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-02 00:59:08.070228 | orchestrator | Friday 02 January 2026 00:57:07 +0000 (0:00:00.643) 0:00:08.150 ********
2026-01-02 00:59:08.070250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070328 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.070341 | orchestrator |
2026-01-02 00:59:08.070360 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-02 00:59:08.070420 | orchestrator | Friday 02 January 2026 00:57:07 +0000 (0:00:00.230) 0:00:08.381 ********
2026-01-02 00:59:08.070446 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c024e4e7d8cc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-02 00:57:04.575106', 'end': '2026-01-02 00:57:04.613564', 'delta': '0:00:00.038458', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c024e4e7d8cc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070482 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1ea59dadd540', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-02 00:57:05.232140', 'end': '2026-01-02 00:57:05.280631', 'delta': '0:00:00.048491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1ea59dadd540'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070575 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c054884be633', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-02 00:57:05.777710', 'end': '2026-01-02 00:57:05.811385', 'delta': '0:00:00.033675', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c054884be633'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-02 00:59:08.070601 | orchestrator |
2026-01-02 00:59:08.070621 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-02 00:59:08.070633 | orchestrator | Friday 02 January 2026 00:57:07 +0000 (0:00:00.174) 0:00:08.555 ********
2026-01-02 00:59:08.070644 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.070656 | orchestrator | ok: [testbed-node-4]
2026-01-02 00:59:08.070673 | orchestrator | ok: [testbed-node-5]
2026-01-02 00:59:08.070692 | orchestrator |
2026-01-02 00:59:08.070710 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-02 00:59:08.070740 | orchestrator | Friday 02 January 2026 00:57:07 +0000 (0:00:00.384) 0:00:08.940 ********
2026-01-02 00:59:08.070754 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-01-02 00:59:08.070773 | orchestrator |
2026-01-02 00:59:08.070792 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-02 00:59:08.070810 | orchestrator | Friday 02 January 2026 00:57:10 +0000 (0:00:02.146) 0:00:11.086 ********
2026-01-02 00:59:08.070822 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.070833 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.070844 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.070854 | orchestrator |
2026-01-02 00:59:08.070866 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-02 00:59:08.070884 | orchestrator | Friday 02 January 2026 00:57:10 +0000 (0:00:00.293) 0:00:11.380 ********
2026-01-02 00:59:08.070902 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.070920 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.070938 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.070957 | orchestrator |
2026-01-02 00:59:08.070979 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-02 00:59:08.070997 | orchestrator | Friday 02 January 2026 00:57:10 +0000 (0:00:00.401) 0:00:11.781 ********
2026-01-02 00:59:08.071016 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071035 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071054 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071072 | orchestrator |
2026-01-02 00:59:08.071091 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-02 00:59:08.071109 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.468) 0:00:12.250 ********
2026-01-02 00:59:08.071127 | orchestrator | ok: [testbed-node-3]
2026-01-02 00:59:08.071148 | orchestrator |
2026-01-02 00:59:08.071167 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-02 00:59:08.071180 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.129) 0:00:12.379 ********
2026-01-02 00:59:08.071192 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071210 | orchestrator |
2026-01-02 00:59:08.071229 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-02 00:59:08.071247 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.225) 0:00:12.604 ********
2026-01-02 00:59:08.071267 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071285 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071305 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071324 | orchestrator |
2026-01-02 00:59:08.071337 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-02 00:59:08.071348 | orchestrator | Friday 02 January 2026 00:57:11 +0000 (0:00:00.292) 0:00:12.897 ********
2026-01-02 00:59:08.071359 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071370 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071414 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071434 | orchestrator |
2026-01-02 00:59:08.071453 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-02 00:59:08.071471 | orchestrator | Friday 02 January 2026 00:57:12 +0000 (0:00:00.301) 0:00:13.199 ********
2026-01-02 00:59:08.071487 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071503 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071519 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071535 | orchestrator |
2026-01-02 00:59:08.071553 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-02 00:59:08.071573 | orchestrator | Friday 02 January 2026 00:57:12 +0000 (0:00:00.477) 0:00:13.676 ********
2026-01-02 00:59:08.071591 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071602 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071613 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071624 | orchestrator |
2026-01-02 00:59:08.071635 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-02 00:59:08.071657 | orchestrator | Friday 02 January 2026 00:57:12 +0000 (0:00:00.320) 0:00:13.996 ********
2026-01-02 00:59:08.071668 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071679 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071690 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071701 | orchestrator |
2026-01-02 00:59:08.071711 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-02 00:59:08.071722 | orchestrator | Friday 02 January 2026 00:57:13 +0000 (0:00:00.314) 0:00:14.311 ********
2026-01-02 00:59:08.071741 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071755 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071773 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071857 | orchestrator |
2026-01-02 00:59:08.071884 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-02 00:59:08.071902 | orchestrator | Friday 02 January 2026 00:57:13 +0000 (0:00:00.291) 0:00:14.602 ********
2026-01-02 00:59:08.071920 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.071937 | orchestrator | skipping: [testbed-node-4]
2026-01-02 00:59:08.071954 | orchestrator | skipping: [testbed-node-5]
2026-01-02 00:59:08.071971 | orchestrator |
2026-01-02 00:59:08.071989 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-02 00:59:08.072008 | orchestrator | Friday 02 January 2026 00:57:14 +0000 (0:00:00.465) 0:00:15.068 ********
2026-01-02 00:59:08.072029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804', 'dm-uuid-LVM-oDCsQFqAfSa6dNekR7EBkGs45lHB6rEjxEg56fF9bwXURWTj0WU6ut4LrqAmuF07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce', 'dm-uuid-LVM-rgPLZ8vXO2nPfvKWTpklEG3SJiaYe7YGxF8ENWvpzmak4GVoCkqMJeuWh4TUAQDw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026', 'dm-uuid-LVM-ToydGqMz0NdFJYSFD2nnthvxr0L1N1tYjsqzGrrOyxGznwoThZWKqX3aNqfblJT5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0', 'dm-uuid-LVM-N7La7drEyxNXHeLll3TwIuNRGF3K0bJub4Af73ag0HaEEuIXfs3i8QX2i65zWrmU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j6yrGS-2HWP-4VVF-va30-HDvZ-1RQB-VvRL68', 'scsi-0QEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4', 'scsi-SQEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yzDxoC-avzk-Rjpo-kCJw-Mmt6-wfd1-UiS9Nm', 'scsi-0QEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91', 'scsi-SQEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507', 'scsi-SQEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072672 | orchestrator | skipping: [testbed-node-3]
2026-01-02 00:59:08.072683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fSKb5S-Nu4n-cIZx-pD2I-gqqM-M0Nc-VOHToN', 'scsi-0QEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf', 'scsi-SQEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7', 'dm-uuid-LVM-4qmPLn1HPIxZ6ZQiaCj89Um5tNbz0sJOm6PhWUzXHFPQIjgyz4hhCkw7KRfcZnKN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AVPnHs-1Dx7-4kF9-CcXu-Zlii-dAN1-E780FS', 'scsi-0QEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005', 'scsi-SQEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f', 'scsi-SQEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9', 'dm-uuid-LVM-v4tjCEpOdr47Lgc9wIGUVShY664D9DcD3Ev00n7LAoSYGXj51xMrUOA7qw9s8nOi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-02 00:59:08.072818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-02 00:59:08.072831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational':
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072842 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.072853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-02 00:59:08.072954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:59:08.072967 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bHjTGW-M7Md-dGH2-1DRX-CahX-HQsY-ZSTrcJ', 'scsi-0QEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee', 'scsi-SQEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:59:08.072993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQAhjU-OSZ2-cMKq-byOK-KbNp-3kb1-CEBdqs', 'scsi-0QEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8', 'scsi-SQEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:59:08.073005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e', 'scsi-SQEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:59:08.073029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-02 00:59:08.073042 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.073053 | orchestrator | 2026-01-02 00:59:08.073064 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-02 00:59:08.073075 | orchestrator | Friday 02 January 2026 00:57:14 +0000 (0:00:00.509) 0:00:15.578 ******** 2026-01-02 00:59:08.073087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804', 'dm-uuid-LVM-oDCsQFqAfSa6dNekR7EBkGs45lHB6rEjxEg56fF9bwXURWTj0WU6ut4LrqAmuF07'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce', 'dm-uuid-LVM-rgPLZ8vXO2nPfvKWTpklEG3SJiaYe7YGxF8ENWvpzmak4GVoCkqMJeuWh4TUAQDw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073321 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026', 'dm-uuid-LVM-ToydGqMz0NdFJYSFD2nnthvxr0L1N1tYjsqzGrrOyxGznwoThZWKqX3aNqfblJT5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073373 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0', 'dm-uuid-LVM-N7La7drEyxNXHeLll3TwIuNRGF3K0bJub4Af73ag0HaEEuIXfs3i8QX2i65zWrmU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-01-02 00:59:08.073566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16', 'scsi-SQEMU_QEMU_HARDDISK_305350a8-4399-44d3-b2ea-bbf7b9eb7f90-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fa5ccc98--5ec0--5843--b525--cc12dffb9804-osd--block--fa5ccc98--5ec0--5843--b525--cc12dffb9804'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-j6yrGS-2HWP-4VVF-va30-HDvZ-1RQB-VvRL68', 'scsi-0QEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4', 'scsi-SQEMU_QEMU_HARDDISK_610525bf-123e-48f5-8f72-a088231f73d4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073658 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce-osd--block--1e1b73ff--0d48--5f4d--91db--a8c1f08fc0ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yzDxoC-avzk-Rjpo-kCJw-Mmt6-wfd1-UiS9Nm', 'scsi-0QEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91', 'scsi-SQEMU_QEMU_HARDDISK_d0e027c6-7483-4a58-a550-b5020c348e91'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073673 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507', 'scsi-SQEMU_QEMU_HARDDISK_88e6ca38-e9bc-414f-be79-2564fe6ee507'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073718 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073734 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073742 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073759 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.073781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15', 
'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16', 'scsi-SQEMU_QEMU_HARDDISK_33c14286-c543-4ffd-bb9f-b1db90f604b2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073797 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--319da19b--b53c--570d--92cc--c377bf830026-osd--block--319da19b--b53c--570d--92cc--c377bf830026'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fSKb5S-Nu4n-cIZx-pD2I-gqqM-M0Nc-VOHToN', 'scsi-0QEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf', 'scsi-SQEMU_QEMU_HARDDISK_a863269e-8a4c-456a-8159-1ce463f39daf'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0-osd--block--aabdb1ab--3cea--5cae--90fa--5f0cfaabc1a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AVPnHs-1Dx7-4kF9-CcXu-Zlii-dAN1-E780FS', 'scsi-0QEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005', 'scsi-SQEMU_QEMU_HARDDISK_2fd5b446-fd37-4cff-9553-5df2f9404005'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f', 'scsi-SQEMU_QEMU_HARDDISK_1ff5c3cd-25d4-4cde-ba93-f5b1aecf565f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073842 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.073850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7', 'dm-uuid-LVM-4qmPLn1HPIxZ6ZQiaCj89Um5tNbz0sJOm6PhWUzXHFPQIjgyz4hhCkw7KRfcZnKN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073864 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9', 'dm-uuid-LVM-v4tjCEpOdr47Lgc9wIGUVShY664D9DcD3Ev00n7LAoSYGXj51xMrUOA7qw9s8nOi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15', 'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16', 
'scsi-SQEMU_QEMU_HARDDISK_551f95f5-89d5-4c4c-89d2-f5559c6efa3b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--804dd052--7dd8--5ffa--9f76--70ebd20e36f7-osd--block--804dd052--7dd8--5ffa--9f76--70ebd20e36f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bHjTGW-M7Md-dGH2-1DRX-CahX-HQsY-ZSTrcJ', 'scsi-0QEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee', 'scsi-SQEMU_QEMU_HARDDISK_26e4f97c-d63e-4b12-851b-95c853c7feee'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.073993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8699efe3--2ea7--5359--bcef--4eac218b02a9-osd--block--8699efe3--2ea7--5359--bcef--4eac218b02a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YQAhjU-OSZ2-cMKq-byOK-KbNp-3kb1-CEBdqs', 'scsi-0QEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8', 'scsi-SQEMU_QEMU_HARDDISK_afdcae1f-177b-4712-b40b-94f97a828de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.074001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e', 'scsi-SQEMU_QEMU_HARDDISK_ec88668e-8f3f-41b2-9ae8-d99cc1bb8c9e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.074071 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-02-00-03-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-02 00:59:08.074089 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074097 | orchestrator | 2026-01-02 00:59:08.074106 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-02 00:59:08.074115 | orchestrator | Friday 02 January 2026 00:57:15 +0000 (0:00:00.584) 0:00:16.162 ******** 2026-01-02 00:59:08.074123 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:59:08.074132 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:59:08.074140 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:59:08.074148 | orchestrator | 2026-01-02 00:59:08.074156 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-02 00:59:08.074164 | orchestrator | Friday 02 January 2026 00:57:15 +0000 (0:00:00.682) 0:00:16.844 ******** 2026-01-02 00:59:08.074172 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:59:08.074180 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:59:08.074188 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:59:08.074196 | orchestrator | 2026-01-02 00:59:08.074204 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-02 00:59:08.074212 | orchestrator | Friday 02 January 2026 00:57:16 +0000 (0:00:00.456) 0:00:17.300 ******** 2026-01-02 00:59:08.074230 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:59:08.074238 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:59:08.074254 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:59:08.074263 | orchestrator | 2026-01-02 00:59:08.074271 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-02 00:59:08.074279 | orchestrator | Friday 02 January 2026 00:57:16 +0000 (0:00:00.644) 0:00:17.945 
******** 2026-01-02 00:59:08.074287 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074295 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074303 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074311 | orchestrator | 2026-01-02 00:59:08.074319 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-02 00:59:08.074327 | orchestrator | Friday 02 January 2026 00:57:17 +0000 (0:00:00.309) 0:00:18.254 ******** 2026-01-02 00:59:08.074335 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074343 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074350 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074358 | orchestrator | 2026-01-02 00:59:08.074366 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-02 00:59:08.074375 | orchestrator | Friday 02 January 2026 00:57:17 +0000 (0:00:00.401) 0:00:18.656 ******** 2026-01-02 00:59:08.074405 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074414 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074422 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074430 | orchestrator | 2026-01-02 00:59:08.074438 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-02 00:59:08.074446 | orchestrator | Friday 02 January 2026 00:57:18 +0000 (0:00:00.479) 0:00:19.135 ******** 2026-01-02 00:59:08.074454 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-02 00:59:08.074462 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-02 00:59:08.074470 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-02 00:59:08.074478 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-02 00:59:08.074486 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-02 00:59:08.074494 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-02 00:59:08.074502 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-02 00:59:08.074509 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-02 00:59:08.074517 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-02 00:59:08.074525 | orchestrator | 2026-01-02 00:59:08.074533 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-02 00:59:08.074547 | orchestrator | Friday 02 January 2026 00:57:18 +0000 (0:00:00.809) 0:00:19.945 ******** 2026-01-02 00:59:08.074555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-02 00:59:08.074563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-02 00:59:08.074571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-02 00:59:08.074579 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-02 00:59:08.074595 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-02 00:59:08.074603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-02 00:59:08.074611 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-02 00:59:08.074626 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-02 00:59:08.074634 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-02 00:59:08.074642 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074650 | orchestrator | 2026-01-02 00:59:08.074658 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-02 00:59:08.074666 | orchestrator | Friday 02 January 2026 00:57:19 +0000 (0:00:00.340) 0:00:20.285 ******** 2026-01-02 
00:59:08.074675 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 00:59:08.074683 | orchestrator | 2026-01-02 00:59:08.074695 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-02 00:59:08.074704 | orchestrator | Friday 02 January 2026 00:57:19 +0000 (0:00:00.654) 0:00:20.939 ******** 2026-01-02 00:59:08.074718 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074726 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074734 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074742 | orchestrator | 2026-01-02 00:59:08.074750 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-02 00:59:08.074758 | orchestrator | Friday 02 January 2026 00:57:20 +0000 (0:00:00.306) 0:00:21.246 ******** 2026-01-02 00:59:08.074766 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074774 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074782 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074790 | orchestrator | 2026-01-02 00:59:08.074798 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-02 00:59:08.074806 | orchestrator | Friday 02 January 2026 00:57:20 +0000 (0:00:00.339) 0:00:21.586 ******** 2026-01-02 00:59:08.074814 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074822 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.074830 | orchestrator | skipping: [testbed-node-5] 2026-01-02 00:59:08.074838 | orchestrator | 2026-01-02 00:59:08.074846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-02 00:59:08.074854 | orchestrator | Friday 02 January 2026 00:57:20 +0000 (0:00:00.311) 0:00:21.897 ******** 2026-01-02 
00:59:08.074862 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:59:08.074870 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:59:08.074878 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:59:08.074886 | orchestrator | 2026-01-02 00:59:08.074894 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-02 00:59:08.074903 | orchestrator | Friday 02 January 2026 00:57:21 +0000 (0:00:00.825) 0:00:22.723 ******** 2026-01-02 00:59:08.074911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:59:08.074919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:59:08.074926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:59:08.074934 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.074942 | orchestrator | 2026-01-02 00:59:08.074950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-02 00:59:08.074964 | orchestrator | Friday 02 January 2026 00:57:22 +0000 (0:00:00.357) 0:00:23.081 ******** 2026-01-02 00:59:08.074972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:59:08.074980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:59:08.074988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:59:08.074996 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.075004 | orchestrator | 2026-01-02 00:59:08.075012 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-02 00:59:08.075020 | orchestrator | Friday 02 January 2026 00:57:22 +0000 (0:00:00.357) 0:00:23.438 ******** 2026-01-02 00:59:08.075028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-02 00:59:08.075036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-02 00:59:08.075043 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-02 00:59:08.075051 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.075059 | orchestrator | 2026-01-02 00:59:08.075067 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-02 00:59:08.075075 | orchestrator | Friday 02 January 2026 00:57:22 +0000 (0:00:00.377) 0:00:23.816 ******** 2026-01-02 00:59:08.075083 | orchestrator | ok: [testbed-node-3] 2026-01-02 00:59:08.075091 | orchestrator | ok: [testbed-node-4] 2026-01-02 00:59:08.075099 | orchestrator | ok: [testbed-node-5] 2026-01-02 00:59:08.075107 | orchestrator | 2026-01-02 00:59:08.075115 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-02 00:59:08.075123 | orchestrator | Friday 02 January 2026 00:57:23 +0000 (0:00:00.295) 0:00:24.112 ******** 2026-01-02 00:59:08.075131 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-02 00:59:08.075139 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-02 00:59:08.075147 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-02 00:59:08.075155 | orchestrator | 2026-01-02 00:59:08.075163 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-02 00:59:08.075171 | orchestrator | Friday 02 January 2026 00:57:23 +0000 (0:00:00.497) 0:00:24.609 ******** 2026-01-02 00:59:08.075179 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 00:59:08.075187 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:59:08.075196 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 00:59:08.075203 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-02 00:59:08.075211 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-02 00:59:08.075219 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-02 00:59:08.075228 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-02 00:59:08.075235 | orchestrator | 2026-01-02 00:59:08.075243 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-02 00:59:08.075251 | orchestrator | Friday 02 January 2026 00:57:24 +0000 (0:00:00.914) 0:00:25.524 ******** 2026-01-02 00:59:08.075259 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-02 00:59:08.075267 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-02 00:59:08.075275 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-02 00:59:08.075283 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-02 00:59:08.075291 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-02 00:59:08.075304 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-02 00:59:08.075316 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-02 00:59:08.075330 | orchestrator | 2026-01-02 00:59:08.075338 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-02 00:59:08.075346 | orchestrator | Friday 02 January 2026 00:57:26 +0000 (0:00:01.836) 0:00:27.360 ******** 2026-01-02 00:59:08.075354 | orchestrator | skipping: [testbed-node-3] 2026-01-02 00:59:08.075362 | orchestrator | skipping: [testbed-node-4] 2026-01-02 00:59:08.075370 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-02 00:59:08.075393 | orchestrator | 2026-01-02 00:59:08.075401 | 
orchestrator | TASK [create openstack pool(s)] ************************************************
2026-01-02 00:59:08.075410 | orchestrator | Friday 02 January 2026 00:57:26 +0000 (0:00:00.360) 0:00:27.721 ********
2026-01-02 00:59:08.075419 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:59:08.075429 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:59:08.075437 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:59:08.075446 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:59:08.075454 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-02 00:59:08.075462 | orchestrator |
2026-01-02 00:59:08.075470 | orchestrator | TASK [generate keys] ***********************************************************
2026-01-02 00:59:08.075479 | orchestrator | Friday 02 January 2026 00:58:12 +0000 (0:00:45.370) 0:01:13.092 ********
2026-01-02 00:59:08.075487 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075495 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075503 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075511 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075519 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075535 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-01-02 00:59:08.075543 | orchestrator |
2026-01-02 00:59:08.075551 | orchestrator | TASK [get keys from monitors] **************************************************
2026-01-02 00:59:08.075559 | orchestrator | Friday 02 January 2026 00:58:36 +0000 (0:00:24.842) 0:01:37.934 ********
2026-01-02 00:59:08.075567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075575 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075583 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075591 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075606 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075615 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075623 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-02 00:59:08.075631 | orchestrator |
2026-01-02 00:59:08.075639 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-01-02 00:59:08.075647 | orchestrator | Friday 02 January 2026 00:58:49 +0000 (0:00:12.500) 0:01:50.435 ********
2026-01-02 00:59:08.075655 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075663 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-02 00:59:08.075671 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-02 00:59:08.075683 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075691 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-02 00:59:08.075704 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-02 00:59:08.075712 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075720 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-02 00:59:08.075728 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-02 00:59:08.075737 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075745 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-02 00:59:08.075753 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-02 00:59:08.075760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075768 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-02 00:59:08.075776 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-02 00:59:08.075784 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-02 00:59:08.075792 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-02 00:59:08.075800 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-02 00:59:08.075808 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-01-02 00:59:08.075817 | orchestrator |
2026-01-02 00:59:08.075825 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:59:08.075833 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-02 00:59:08.075842 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-02 00:59:08.075850 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-02 00:59:08.075859 | orchestrator |
2026-01-02 00:59:08.075867 | orchestrator |
2026-01-02 00:59:08.075875 | orchestrator |
2026-01-02 00:59:08.075883 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:59:08.075891 | orchestrator | Friday 02 January 2026 00:59:07 +0000 (0:00:17.722) 0:02:08.157 ********
2026-01-02 00:59:08.075899 | orchestrator | ===============================================================================
2026-01-02 00:59:08.075907 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.37s
2026-01-02 00:59:08.075915 | orchestrator | generate keys ---------------------------------------------------------- 24.84s
2026-01-02 00:59:08.075923 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.72s
2026-01-02 00:59:08.075936 | orchestrator | get keys from monitors ------------------------------------------------- 12.50s
2026-01-02 00:59:08.075944 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.15s
2026-01-02 00:59:08.075952 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.98s
2026-01-02 00:59:08.075960 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.84s
2026-01-02 00:59:08.075968 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.91s
2026-01-02 00:59:08.075976 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.83s
2026-01-02 00:59:08.075984 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.81s
2026-01-02 00:59:08.075992 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s
2026-01-02 00:59:08.076000 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s
2026-01-02 00:59:08.076008 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2026-01-02 00:59:08.076016 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s
2026-01-02 00:59:08.076023 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2026-01-02 00:59:08.076031 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.64s
2026-01-02 00:59:08.076039 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s
2026-01-02 00:59:08.076047 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2026-01-02 00:59:08.076055 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.51s
2026-01-02 00:59:08.076063 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.50s
2026-01-02 00:59:08.076071 | orchestrator | 2026-01-02 00:59:08 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:08.076079 | orchestrator | 2026-01-02 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:11.124220 | orchestrator | 2026-01-02 00:59:11 | INFO  | Task e606cee1-ca9a-47fe-bd0e-99f3002516b2 is in state STARTED
2026-01-02 00:59:11.128021 | orchestrator | 2026-01-02 00:59:11 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:11.130502 | orchestrator | 2026-01-02 00:59:11 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:11.130579 | orchestrator | 2026-01-02 00:59:11 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:14.187357 | orchestrator | 2026-01-02 00:59:14 | INFO  | Task e606cee1-ca9a-47fe-bd0e-99f3002516b2 is in state STARTED
2026-01-02 00:59:14.189434 | orchestrator | 2026-01-02 00:59:14 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:14.191750 | orchestrator | 2026-01-02 00:59:14 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:14.192108 | orchestrator | 2026-01-02 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:17.230621 | orchestrator |
2026-01-02 00:59:17.231067 | orchestrator |
2026-01-02 00:59:17.231090 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 00:59:17.231100 | orchestrator |
2026-01-02 00:59:17.231109 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 00:59:17.231117 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:00.251) 0:00:00.251 ********
2026-01-02 00:59:17.231126 | orchestrator | ok: [testbed-node-0]
2026-01-02 00:59:17.231135 |
orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:17.231143 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:17.231151 | orchestrator | 2026-01-02 00:59:17.231159 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 00:59:17.231168 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:00.286) 0:00:00.537 ******** 2026-01-02 00:59:17.231196 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-02 00:59:17.231206 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-02 00:59:17.231213 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-02 00:59:17.231221 | orchestrator | 2026-01-02 00:59:17.231229 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-02 00:59:17.231237 | orchestrator | 2026-01-02 00:59:17.231245 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 00:59:17.231253 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:00.308) 0:00:00.846 ******** 2026-01-02 00:59:17.231262 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:17.231271 | orchestrator | 2026-01-02 00:59:17.231279 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-02 00:59:17.231286 | orchestrator | Friday 02 January 2026 00:58:15 +0000 (0:00:00.419) 0:00:01.266 ******** 2026-01-02 00:59:17.231299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.231312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.231349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.231517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.231537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.231547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.231556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.231979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232021 | orchestrator | 2026-01-02 00:59:17.232030 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-02 00:59:17.232068 | orchestrator | Friday 02 January 2026 00:58:17 +0000 (0:00:01.799) 0:00:03.065 ******** 2026-01-02 00:59:17.232078 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.232087 | orchestrator | 2026-01-02 00:59:17.232095 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-02 00:59:17.232103 | orchestrator | Friday 02 January 2026 00:58:17 +0000 (0:00:00.107) 0:00:03.173 ******** 2026-01-02 00:59:17.232111 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.232120 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.232127 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.232136 | orchestrator | 2026-01-02 00:59:17.232144 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-02 00:59:17.232152 | orchestrator | Friday 02 January 2026 00:58:17 +0000 (0:00:00.360) 0:00:03.533 ******** 2026-01-02 00:59:17.232160 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-01-02 00:59:17.232168 | orchestrator | 2026-01-02 00:59:17.232176 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 00:59:17.232184 | orchestrator | Friday 02 January 2026 00:58:18 +0000 (0:00:00.781) 0:00:04.315 ******** 2026-01-02 00:59:17.232192 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 00:59:17.232200 | orchestrator | 2026-01-02 00:59:17.232208 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-02 00:59:17.232216 | orchestrator | Friday 02 January 2026 00:58:18 +0000 (0:00:00.476) 0:00:04.792 ******** 2026-01-02 00:59:17.232225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.232235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.232272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.232307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232334 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232403 | orchestrator | 2026-01-02 00:59:17.232412 | orchestrator | 
TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-02 00:59:17.232420 | orchestrator | Friday 02 January 2026 00:58:22 +0000 (0:00:03.405) 0:00:08.197 ******** 2026-01-02 00:59:17.232455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.232466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.232475 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.232483 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.232492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.232512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.232546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.232555 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.232564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.232573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.232581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.232596 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.232606 | orchestrator | 2026-01-02 00:59:17.232616 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-02 00:59:17.232625 | orchestrator | Friday 02 January 2026 00:58:22 +0000 (0:00:00.554) 0:00:08.752 ******** 2026-01-02 00:59:17.232640 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.232673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.232684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.232694 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.232704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.232714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.232738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.232747 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.232780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 
00:59:17.232791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.232799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.232808 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.232816 | orchestrator | 2026-01-02 00:59:17.232824 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-02 00:59:17.232832 | orchestrator | Friday 02 January 2026 00:58:23 +0000 (0:00:00.751) 0:00:09.503 ******** 2026-01-02 00:59:17.232841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.232859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.232891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17 | INFO  | Task e606cee1-ca9a-47fe-bd0e-99f3002516b2 is in state SUCCESS
2026-01-02 00:59:17.232911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.232993 | orchestrator | 2026-01-02 00:59:17.233001 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-02 00:59:17.233009 | orchestrator | Friday 02 January 2026 00:58:26 +0000 (0:00:03.183) 0:00:12.687 ******** 2026-01-02 00:59:17.233018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.233032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.233045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.233075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.233085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.233094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.233109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.233117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.233129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.233138 | orchestrator | 2026-01-02 00:59:17.233146 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-02 00:59:17.233154 | orchestrator | Friday 02 January 2026 00:58:32 +0000 (0:00:05.808) 0:00:18.495 ******** 2026-01-02 00:59:17.233162 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:17.233170 | orchestrator | changed: [testbed-node-1] 2026-01-02 00:59:17.233178 | orchestrator | changed: [testbed-node-2] 2026-01-02 00:59:17.233186 | orchestrator | 2026-01-02 00:59:17.233194 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-02 00:59:17.233202 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:01.538) 0:00:20.034 ******** 2026-01-02 00:59:17.233211 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.233219 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.233249 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.233259 | orchestrator | 2026-01-02 00:59:17.233267 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-02 00:59:17.233275 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:00.534) 0:00:20.569 ******** 2026-01-02 00:59:17.233283 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.233291 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.233298 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.233306 | orchestrator | 2026-01-02 
00:59:17.233314 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-02 00:59:17.233322 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:00.294) 0:00:20.863 ******** 2026-01-02 00:59:17.233330 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.233338 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.233346 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.233354 | orchestrator | 2026-01-02 00:59:17.233362 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-02 00:59:17.233385 | orchestrator | Friday 02 January 2026 00:58:35 +0000 (0:00:00.522) 0:00:21.386 ******** 2026-01-02 00:59:17.233400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.233409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.233417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.233426 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.233461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.233472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.233486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.233495 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.233503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.233512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.233524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.233533 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.233541 | orchestrator | 2026-01-02 00:59:17.233549 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 00:59:17.233557 | orchestrator | Friday 02 January 2026 00:58:36 +0000 (0:00:00.687) 0:00:22.074 ******** 2026-01-02 00:59:17.233565 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.233573 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.233581 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.233589 | orchestrator | 2026-01-02 00:59:17.233596 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-02 00:59:17.233605 | orchestrator | Friday 02 January 2026 00:58:36 +0000 (0:00:00.310) 0:00:22.385 ******** 2026-01-02 00:59:17.233634 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-02 00:59:17.233649 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-02 00:59:17.233657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-02 00:59:17.233665 | orchestrator | 2026-01-02 00:59:17.233673 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-02 00:59:17.233681 | orchestrator | Friday 02 January 2026 00:58:38 +0000 (0:00:01.699) 0:00:24.084 ******** 2026-01-02 00:59:17.233689 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 00:59:17.233697 | orchestrator | 2026-01-02 00:59:17.233705 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-02 00:59:17.233713 | orchestrator | Friday 02 January 2026 00:58:39 +0000 (0:00:00.906) 0:00:24.990 ******** 2026-01-02 00:59:17.233722 | 
orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.233730 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.233738 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.233746 | orchestrator | 2026-01-02 00:59:17.233753 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-02 00:59:17.233762 | orchestrator | Friday 02 January 2026 00:58:39 +0000 (0:00:00.820) 0:00:25.811 ******** 2026-01-02 00:59:17.233770 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-02 00:59:17.233778 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-02 00:59:17.233785 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 00:59:17.233793 | orchestrator | 2026-01-02 00:59:17.233801 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-02 00:59:17.233809 | orchestrator | Friday 02 January 2026 00:58:40 +0000 (0:00:01.020) 0:00:26.831 ******** 2026-01-02 00:59:17.233821 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:17.233835 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:17.233847 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:17.233855 | orchestrator | 2026-01-02 00:59:17.233863 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-02 00:59:17.233871 | orchestrator | Friday 02 January 2026 00:58:41 +0000 (0:00:00.352) 0:00:27.183 ******** 2026-01-02 00:59:17.233879 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-02 00:59:17.233887 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-02 00:59:17.233894 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-02 00:59:17.233902 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-02 
00:59:17.233910 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-02 00:59:17.233919 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-02 00:59:17.233926 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-02 00:59:17.233935 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-02 00:59:17.233943 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-02 00:59:17.233951 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-02 00:59:17.233958 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-02 00:59:17.233966 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-02 00:59:17.233974 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-02 00:59:17.233982 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-02 00:59:17.233998 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-02 00:59:17.234006 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-02 00:59:17.234053 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-02 00:59:17.234064 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-02 00:59:17.234080 | orchestrator | changed: [testbed-node-1] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-02 00:59:17.234088 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-02 00:59:17.234096 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-02 00:59:17.234104 | orchestrator | 2026-01-02 00:59:17.234112 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-02 00:59:17.234120 | orchestrator | Friday 02 January 2026 00:58:50 +0000 (0:00:09.262) 0:00:36.445 ******** 2026-01-02 00:59:17.234128 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-02 00:59:17.234136 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-02 00:59:17.234144 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-02 00:59:17.234177 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-02 00:59:17.234186 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-02 00:59:17.234194 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-02 00:59:17.234202 | orchestrator | 2026-01-02 00:59:17.234210 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-01-02 00:59:17.234218 | orchestrator | Friday 02 January 2026 00:58:53 +0000 (0:00:02.729) 0:00:39.175 ******** 2026-01-02 00:59:17.234227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.234237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.234256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-02 00:59:17.234289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.234299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.234307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-02 00:59:17.234315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.234324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.234338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-02 00:59:17.234346 | orchestrator | 2026-01-02 00:59:17.234355 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-01-02 00:59:17.234363 | orchestrator | Friday 02 January 2026 00:58:55 +0000 (0:00:02.278) 0:00:41.454 ******** 2026-01-02 00:59:17.234395 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 00:59:17.234408 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:59:17.234417 | orchestrator | } 2026-01-02 00:59:17.234425 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 00:59:17.234433 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:59:17.234441 | orchestrator | } 2026-01-02 00:59:17.234449 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 00:59:17.234457 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 00:59:17.234465 | orchestrator | } 2026-01-02 00:59:17.234473 | orchestrator | 2026-01-02 00:59:17.234481 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 00:59:17.234489 | orchestrator | Friday 02 January 2026 00:58:55 +0000 
(0:00:00.327) 0:00:41.781 ******** 2026-01-02 00:59:17.234504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.234514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.234523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.234536 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.234545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.234558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.234574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.234583 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.234591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-02 00:59:17.234600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-02 00:59:17.234614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-02 00:59:17.234622 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.234630 | orchestrator | 2026-01-02 00:59:17.234639 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-02 00:59:17.234647 | orchestrator | Friday 02 January 2026 00:58:56 +0000 (0:00:00.932) 0:00:42.714 ******** 2026-01-02 00:59:17.234655 | orchestrator | skipping: [testbed-node-0] 2026-01-02 00:59:17.234663 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.234671 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.234679 | 
orchestrator | 2026-01-02 00:59:17.234687 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-02 00:59:17.234695 | orchestrator | Friday 02 January 2026 00:58:57 +0000 (0:00:00.323) 0:00:43.038 ******** 2026-01-02 00:59:17.234703 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:17.234711 | orchestrator | 2026-01-02 00:59:17.234719 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-02 00:59:17.234727 | orchestrator | Friday 02 January 2026 00:58:59 +0000 (0:00:02.506) 0:00:45.544 ******** 2026-01-02 00:59:17.234735 | orchestrator | changed: [testbed-node-0] 2026-01-02 00:59:17.234742 | orchestrator | 2026-01-02 00:59:17.234750 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-02 00:59:17.234763 | orchestrator | Friday 02 January 2026 00:59:02 +0000 (0:00:02.429) 0:00:47.973 ******** 2026-01-02 00:59:17.234771 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:17.234779 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:17.234787 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:17.234795 | orchestrator | 2026-01-02 00:59:17.234803 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-02 00:59:17.234811 | orchestrator | Friday 02 January 2026 00:59:03 +0000 (0:00:00.973) 0:00:48.946 ******** 2026-01-02 00:59:17.234818 | orchestrator | ok: [testbed-node-0] 2026-01-02 00:59:17.234826 | orchestrator | ok: [testbed-node-1] 2026-01-02 00:59:17.234834 | orchestrator | ok: [testbed-node-2] 2026-01-02 00:59:17.234842 | orchestrator | 2026-01-02 00:59:17.234850 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-02 00:59:17.234858 | orchestrator | Friday 02 January 2026 00:59:03 +0000 (0:00:00.340) 0:00:49.287 ******** 2026-01-02 00:59:17.234866 | orchestrator | skipping: 
[testbed-node-0] 2026-01-02 00:59:17.234875 | orchestrator | skipping: [testbed-node-1] 2026-01-02 00:59:17.234883 | orchestrator | skipping: [testbed-node-2] 2026-01-02 00:59:17.234891 | orchestrator | 2026-01-02 00:59:17.234903 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-02 00:59:17.234911 | orchestrator | Friday 02 January 2026 00:59:03 +0000 (0:00:00.565) 0:00:49.852 ******** 2026-01-02 00:59:17.235078 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\n2026-01-02 00:59:05.525 INFO Loading config file at /var/lib/kolla/config_files/config.json\n2026-01-02 00:59:05.525 INFO Validating config file\n2026-01-02 00:59:05.525 INFO Kolla config strategy set to: COPY_ALWAYS\n2026-01-02 00:59:05.530 INFO Copying service configuration files\n2026-01-02 00:59:05.530 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\n2026-01-02 00:59:05.538 INFO Setting permission for /usr/bin/keystone-startup.sh\n2026-01-02 00:59:05.538 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\n2026-01-02 00:59:05.539 INFO Setting permission for /etc/keystone/keystone.conf\n2026-01-02 00:59:05.539 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-02 00:59:05.546 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-02 00:59:05.547 INFO Creating directory /var/lib/kolla/share/ca-certificates\n2026-01-02 00:59:05.547 INFO Setting permission for /var/lib/kolla/share/ca-certificates\n2026-01-02 00:59:05.547 INFO Copying /var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-02 00:59:05.547 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-02 
00:59:05.547 INFO Writing out command to execute\n2026-01-02 00:59:05.547 INFO Setting permission for /var/log/kolla\n2026-01-02 00:59:05.548 INFO Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-02 00:59:15.204 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397\n2026-01-02 00:59:15.211 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-02 00:59:15.211 1081 ERROR keystone Traceback (most recent call last):\n2026-01-02 00:59:15.211 1081 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-02 00:59:15.211 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection\n2026-01-02 00:59:15.211 1081 ERROR keystone return self.pool.connect()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-02 00:59:15.211 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-02 00:59:15.211 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-02 00:59:15.211 1081 ERROR keystone rec = pool._do_get()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-02 00:59:15.211 1081 ERROR keystone with util.safe_reraise():\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-02 00:59:15.211 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-02 00:59:15.211 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-02 00:59:15.211 1081 ERROR keystone return self._create_connection()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-02 00:59:15.211 1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-02 00:59:15.211 1081 ERROR keystone self.__connect()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-02 00:59:15.211 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-02 00:59:15.211 1081 ERROR keystone self(*args, **kw)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-02 00:59:15.211 1081 ERROR keystone fn(*args, **kw)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-02 00:59:15.211 1081 ERROR keystone return once_fn(*arg, **kw)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in 
first_connect\n2026-01-02 00:59:15.211 1081 ERROR keystone dialect.initialize(c)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize\n2026-01-02 00:59:15.211 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize\n2026-01-02 00:59:15.211 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level\n2026-01-02 00:59:15.211 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level\n2026-01-02 00:59:15.211 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-02 00:59:15.211 1081 ERROR keystone result = self._query(query)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-02 00:59:15.211 1081 ERROR keystone conn.query(q)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-02 00:59:15.211 1081 ERROR keystone self._affected_rows = 
self._read_query_result(unbuffered=unbuffered)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-02 00:59:15.211 1081 ERROR keystone result.read()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-02 00:59:15.211 1081 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-02 00:59:15.211 1081 ERROR keystone packet.raise_for_error()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-02 00:59:15.211 1081 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-02 00:59:15.211 1081 ERROR keystone raise errorclass(errno, errval)\n2026-01-02 00:59:15.211 1081 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-02 00:59:15.211 1081 ERROR keystone \n2026-01-02 00:59:15.211 1081 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-02 00:59:15.211 1081 ERROR keystone \n2026-01-02 00:59:15.211 1081 ERROR keystone Traceback (most recent call last):\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-02 00:59:15.211 1081 ERROR keystone sys.exit(main())\n2026-01-02 00:59:15.211 1081 ERROR keystone 
^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-02 00:59:15.211 1081 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1727, in main\n2026-01-02 00:59:15.211 1081 ERROR keystone CONF.command.cmd_class.main()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 492, in main\n2026-01-02 00:59:15.211 1081 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 321, in offline_sync_database_to_version\n2026-01-02 00:59:15.211 1081 ERROR keystone _db_sync(engine=engine)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 210, in _db_sync\n2026-01-02 00:59:15.211 1081 ERROR keystone with sql.session_for_write() as session:\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-02 00:59:15.211 1081 ERROR keystone return next(self.gen)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1199, in _transaction_scope\n2026-01-02 00:59:15.211 1081 ERROR keystone with current._produce_block(\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-02 00:59:15.211 1081 ERROR keystone return next(self.gen)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 841, in _session\n2026-01-02 00:59:15.211 1081 ERROR keystone self.session = self.factory._create_session(\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 459, in _create_session\n2026-01-02 00:59:15.211 1081 ERROR keystone self._start()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 530, in _start\n2026-01-02 00:59:15.211 1081 ERROR keystone self._setup_for_connection(\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 647, in _setup_for_connection\n2026-01-02 00:59:15.211 1081 ERROR keystone engine = engines.create_engine(\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-02 00:59:15.211 1081 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 271, in create_engine\n2026-01-02 00:59:15.211 1081 ERROR keystone _test_connection(engine_event_target, max_retries, retry_interval)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 169, in _test_connection\n2026-01-02 00:59:15.211 1081 ERROR keystone conn = engine.connect()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3274, in connect\n2026-01-02 00:59:15.211 1081 ERROR keystone return self._connection_cls(self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-02 00:59:15.211 1081 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2436, in _handle_dbapi_exception_noconnection\n2026-01-02 00:59:15.211 1081 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-02 00:59:15.211 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection\n2026-01-02 00:59:15.211 1081 ERROR keystone return self.pool.connect()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-02 00:59:15.211 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-02 00:59:15.211 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-02 00:59:15.211 1081 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-02 00:59:15.211 1081 ERROR keystone rec = pool._do_get()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-02 00:59:15.211 1081 ERROR keystone with util.safe_reraise():\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-02 00:59:15.211 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-02 00:59:15.211 1081 ERROR keystone return self._create_connection()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-02 00:59:15.211 1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-02 00:59:15.211 1081 ERROR keystone self.__connect()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-02 00:59:15.211 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-02 00:59:15.211 1081 ERROR keystone self(*args, **kw)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-02 00:59:15.211 1081 ERROR keystone fn(*args, **kw)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-02 00:59:15.211 1081 ERROR keystone return once_fn(*arg, **kw)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect\n2026-01-02 00:59:15.211 1081 ERROR keystone dialect.initialize(c)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize\n2026-01-02 00:59:15.211 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize\n2026-01-02 00:59:15.211 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level\n2026-01-02 00:59:15.211 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", 
line 2603, in get_isolation_level\n2026-01-02 00:59:15.211 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-02 00:59:15.211 1081 ERROR keystone result = self._query(query)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-02 00:59:15.211 1081 ERROR keystone conn.query(q)\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-02 00:59:15.211 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-02 00:59:15.211 1081 ERROR keystone result.read()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-02 00:59:15.211 1081 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-02 00:59:15.211 1081 ERROR keystone packet.raise_for_error()\n2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-02 00:59:15.211 1081 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-02 00:59:15.211 1081 ERROR keystone 
File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-02 00:59:15.211 1081 ERROR keystone raise errorclass(errno, errval)\n2026-01-02 00:59:15.211 1081 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-02 00:59:15.211 1081 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-02 00:59:15.211 1081 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "2026-01-02 00:59:05.525 INFO Loading config file at /var/lib/kolla/config_files/config.json", "2026-01-02 00:59:05.525 INFO Validating config file", "2026-01-02 00:59:05.525 INFO Kolla config strategy set to: COPY_ALWAYS", "2026-01-02 00:59:05.530 INFO Copying service configuration files", "2026-01-02 00:59:05.530 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "2026-01-02 00:59:05.538 INFO Setting permission for /usr/bin/keystone-startup.sh", "2026-01-02 00:59:05.538 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "2026-01-02 00:59:05.539 INFO Setting permission for /etc/keystone/keystone.conf", "2026-01-02 00:59:05.539 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "2026-01-02 00:59:05.546 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "2026-01-02 00:59:05.547 INFO Creating directory /var/lib/kolla/share/ca-certificates", "2026-01-02 00:59:05.547 INFO Setting permission for /var/lib/kolla/share/ca-certificates", "2026-01-02 00:59:05.547 INFO Copying /var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt", "2026-01-02 00:59:05.547 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt", "2026-01-02 00:59:05.547 INFO Writing out command to execute", "2026-01-02 
00:59:05.547 INFO Setting permission for /var/log/kolla", "2026-01-02 00:59:05.548 INFO Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-02 00:59:15.204 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397", "2026-01-02 00:59:15.211 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-02 00:59:15.211 1081 ERROR keystone Traceback (most recent call last):", "2026-01-02 00:59:15.211 1081 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-02 00:59:15.211 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection", "2026-01-02 00:59:15.211 1081 ERROR keystone return self.pool.connect()", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-02 00:59:15.211 1081 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-02 00:59:15.211 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-02 00:59:15.211 1081 ERROR keystone rec = pool._do_get()", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-02 00:59:15.211 1081 ERROR keystone with util.safe_reraise():", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-02 00:59:15.211 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-02 
00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-02 00:59:15.211 1081 ERROR keystone return self._create_connection()", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-02 00:59:15.211 1081 ERROR keystone return _ConnectionRecord(self)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-02 00:59:15.211 1081 ERROR keystone self.__connect()", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-02 00:59:15.211 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-02 00:59:15.211 1081 ERROR keystone self(*args, **kw)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-02 00:59:15.211 1081 ERROR keystone fn(*args, **kw)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go", "2026-01-02 00:59:15.211 1081 ERROR keystone return once_fn(*arg, **kw)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in first_connect", "2026-01-02 00:59:15.211 1081 ERROR keystone dialect.initialize(c)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2898, in initialize", "2026-01-02 00:59:15.211 1081 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 533, in initialize", "2026-01-02 00:59:15.211 1081 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 584, in get_default_isolation_level", "2026-01-02 00:59:15.211 1081 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2603, in get_isolation_level", "2026-01-02 00:59:15.211 1081 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-02 00:59:15.211 1081 ERROR keystone result = self._query(query)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-02 00:59:15.211 1081 ERROR keystone conn.query(q)", "2026-01-02 00:59:15.211 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-02 00:59:15.211 1081 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-02 00:59:15.211 1081 ERROR keystone result.read()", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-02 00:59:15.211 1081 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-02 00:59:15.211 1081 ERROR keystone packet.raise_for_error()", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-02 00:59:15.211 1081 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-02 00:59:15.211 1081 ERROR keystone raise errorclass(errno, errval)", "2026-01-02 00:59:15.211 1081 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-02 00:59:15.211 1081 ERROR keystone ", "2026-01-02 00:59:15.211 1081 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-02 00:59:15.211 1081 ERROR keystone ", "2026-01-02 00:59:15.211 1081 ERROR keystone Traceback (most recent call last):", "2026-01-02 
00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-02 00:59:15.211 1081 ERROR keystone sys.exit(main())", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-02 00:59:15.211 1081 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1727, in main", "2026-01-02 00:59:15.211 1081 ERROR keystone CONF.command.cmd_class.main()", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 492, in main", "2026-01-02 00:59:15.211 1081 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 321, in offline_sync_database_to_version", "2026-01-02 00:59:15.211 1081 ERROR keystone _db_sync(engine=engine)", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 210, in _db_sync", "2026-01-02 00:59:15.211 1081 ERROR keystone with sql.session_for_write() as session:", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-02 00:59:15.211 1081 ERROR keystone return next(self.gen)", "2026-01-02 00:59:15.211 1081 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-02 00:59:15.211 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1199, in _transaction_scope", "2026-01-02 00:59:15.211 1081 ERROR keystone with current._produce_block(", "2026-01-02 00:59:15.211 1081 ERROR keystone File 
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
2026-01-02 00:59:15.211 1081 ERROR keystone     return next(self.gen)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 841, in _session
2026-01-02 00:59:15.211 1081 ERROR keystone     self.session = self.factory._create_session(
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 459, in _create_session
2026-01-02 00:59:15.211 1081 ERROR keystone     self._start()
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 530, in _start
2026-01-02 00:59:15.211 1081 ERROR keystone     self._setup_for_connection(
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 647, in _setup_for_connection
2026-01-02 00:59:15.211 1081 ERROR keystone     engine = engines.create_engine(
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py", line 41, in decorator
2026-01-02 00:59:15.211 1081 ERROR keystone     return wrapped(*args, **kwargs)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py", line 271, in create_engine
2026-01-02 00:59:15.211 1081 ERROR keystone     _test_connection(engine_event_target, max_retries, retry_interval)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py", line 169, in _test_connection
2026-01-02 00:59:15.211 1081 ERROR keystone     conn = engine.connect()
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 3274, in connect
2026-01-02 00:59:15.211 1081 ERROR keystone     return self._connection_cls(self)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 148, in __init__
2026-01-02 00:59:15.211 1081 ERROR keystone     Connection._handle_dbapi_exception_noconnection(
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2436, in _handle_dbapi_exception_noconnection
2026-01-02 00:59:15.211 1081 ERROR keystone     raise newraise.with_traceback(exc_info[2]) from e
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 146, in __init__
2026-01-02 00:59:15.211 1081 ERROR keystone     self._dbapi_connection = engine.raw_connection()
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 3298, in raw_connection
2026-01-02 00:59:15.211 1081 ERROR keystone     return self.pool.connect()
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 449, in connect
2026-01-02 00:59:15.211 1081 ERROR keystone     return _ConnectionFairy._checkout(self)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 1263, in _checkout
2026-01-02 00:59:15.211 1081 ERROR keystone     fairy = _ConnectionRecord.checkout(pool)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 712, in checkout
2026-01-02 00:59:15.211 1081 ERROR keystone     rec = pool._do_get()
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py", line 179, in _do_get
2026-01-02 00:59:15.211 1081 ERROR keystone     with util.safe_reraise():
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
2026-01-02 00:59:15.211 1081 ERROR keystone     raise exc_value.with_traceback(exc_tb)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py", line 177, in _do_get
2026-01-02 00:59:15.211 1081 ERROR keystone     return self._create_connection()
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 390, in _create_connection
2026-01-02 00:59:15.211 1081 ERROR keystone     return _ConnectionRecord(self)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 674, in __init__
2026-01-02 00:59:15.211 1081 ERROR keystone     self.__connect()
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 914, in __connect
2026-01-02 00:59:15.211 1081 ERROR keystone     )._exec_w_sync_on_first_run(self.dbapi_connection, self)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py", line 483, in _exec_w_sync_on_first_run
2026-01-02 00:59:15.211 1081 ERROR keystone     self(*args, **kw)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py", line 497, in __call__
2026-01-02 00:59:15.211 1081 ERROR keystone     fn(*args, **kw)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 1916, in go
2026-01-02 00:59:15.211 1081 ERROR keystone     return once_fn(*arg, **kw)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py", line 752, in first_connect
2026-01-02 00:59:15.211 1081 ERROR keystone     dialect.initialize(c)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2898, in initialize
2026-01-02 00:59:15.211 1081 ERROR keystone     default.DefaultDialect.initialize(self, connection)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 533, in initialize
2026-01-02 00:59:15.211 1081 ERROR keystone     self.default_isolation_level = self.get_default_isolation_level(
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 584, in get_default_isolation_level
2026-01-02 00:59:15.211 1081 ERROR keystone     return self.get_isolation_level(dbapi_conn)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2603, in get_isolation_level
2026-01-02 00:59:15.211 1081 ERROR keystone     cursor.execute("SELECT @@transaction_isolation")
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 153, in execute
2026-01-02 00:59:15.211 1081 ERROR keystone     result = self._query(query)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 322, in _query
2026-01-02 00:59:15.211 1081 ERROR keystone     conn.query(q)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 563, in query
2026-01-02 00:59:15.211 1081 ERROR keystone     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 825, in _read_query_result
2026-01-02 00:59:15.211 1081 ERROR keystone     result.read()
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 1199, in read
2026-01-02 00:59:15.211 1081 ERROR keystone     first_packet = self.connection._read_packet()
2026-01-02 00:59:15.211 1081 ERROR keystone     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 775, in _read_packet
2026-01-02 00:59:15.211 1081 ERROR keystone     packet.raise_for_error()
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py", line 219, in raise_for_error
2026-01-02 00:59:15.211 1081 ERROR keystone     err.raise_mysql_exception(self._data)
2026-01-02 00:59:15.211 1081 ERROR keystone   File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
2026-01-02 00:59:15.211 1081 ERROR keystone     raise errorclass(errno, errval)
2026-01-02 00:59:15.211 1081 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, "Unknown system variable 'transaction_isolation'")
2026-01-02 00:59:15.211 1081 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)
stdout:
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
2026-01-02 00:59:17.235158 | orchestrator |
2026-01-02 00:59:17.235167 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 00:59:17.235176 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-01-02 00:59:17.235185 | orchestrator | testbed-node-1 : ok=18  changed=10  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-02 00:59:17.235194 | orchestrator | testbed-node-2 : ok=18  changed=10  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-02 00:59:17.235202 | orchestrator |
2026-01-02 00:59:17.235210 | orchestrator |
2026-01-02 00:59:17.235218 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 00:59:17.235226 | orchestrator | Friday 02 January 2026 00:59:16 +0000 (0:00:12.224) 0:01:02.077 ********
2026-01-02 00:59:17.235234 | orchestrator | ===============================================================================
2026-01-02 00:59:17.235241 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.22s
2026-01-02 00:59:17.235249 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.26s
2026-01-02 00:59:17.235257 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.81s
2026-01-02 00:59:17.235265 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s
2026-01-02 00:59:17.235273 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.18s
2026-01-02 00:59:17.235281 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.73s
2026-01-02 00:59:17.235289 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.51s
2026-01-02 00:59:17.235297 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.43s
2026-01-02 00:59:17.235305 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.28s
2026-01-02 00:59:17.235313 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.80s
2026-01-02 00:59:17.235321 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.70s
2026-01-02 00:59:17.235328 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.54s
2026-01-02 00:59:17.235336 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.02s
2026-01-02 00:59:17.235344 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 0.97s
2026-01-02 00:59:17.235356 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.93s
2026-01-02 00:59:17.235364 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.91s
2026-01-02 00:59:17.235386 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.82s
2026-01-02 00:59:17.235400 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.78s
2026-01-02 00:59:17.235408 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.75s
2026-01-02 00:59:17.235416 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.69s
2026-01-02 00:59:17.235424 | orchestrator | 2026-01-02 00:59:17 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:17.235437 | orchestrator | 2026-01-02 00:59:17 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:17.235446 | orchestrator | 2026-01-02 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:20.278842 | orchestrator | 2026-01-02 00:59:20 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:20.281591 | orchestrator | 2026-01-02 00:59:20 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:20.285759 | orchestrator | 2026-01-02 00:59:20 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:20.286548 | orchestrator | 2026-01-02 00:59:20 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:20.288684 | orchestrator | 2026-01-02 00:59:20 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:20.288724 | orchestrator | 2026-01-02 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:23.324154 | orchestrator | 2026-01-02 00:59:23 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:23.326346 | orchestrator | 2026-01-02 00:59:23 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:23.328602 | orchestrator | 2026-01-02 00:59:23 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:23.330541 | orchestrator | 2026-01-02 00:59:23 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:23.332029 | orchestrator | 2026-01-02 00:59:23 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:23.332231 | orchestrator | 2026-01-02 00:59:23 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:26.374632 | orchestrator | 2026-01-02 00:59:26 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:26.376557 | orchestrator | 2026-01-02 00:59:26 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:26.379453 | orchestrator | 2026-01-02 00:59:26 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:26.381575 | orchestrator | 2026-01-02 00:59:26 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:26.382875 | orchestrator | 2026-01-02 00:59:26 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:26.383001 | orchestrator | 2026-01-02 00:59:26 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:29.432332 | orchestrator | 2026-01-02 00:59:29 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:29.434720 | orchestrator | 2026-01-02 00:59:29 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:29.436087 | orchestrator | 2026-01-02 00:59:29 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:29.438068 | orchestrator | 2026-01-02 00:59:29 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:29.439775 | orchestrator | 2026-01-02 00:59:29 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:29.439834 | orchestrator | 2026-01-02 00:59:29 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:32.494450 | orchestrator | 2026-01-02 00:59:32 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:32.495649 | orchestrator | 2026-01-02 00:59:32 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:32.498235 | orchestrator | 2026-01-02 00:59:32 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:32.500264 | orchestrator | 2026-01-02 00:59:32 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:32.502136 | orchestrator | 2026-01-02 00:59:32 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:32.502423 | orchestrator | 2026-01-02 00:59:32 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:35.542298 | orchestrator | 2026-01-02 00:59:35 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:35.544080 | orchestrator | 2026-01-02 00:59:35 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:35.545993 | orchestrator | 2026-01-02 00:59:35 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:35.547850 | orchestrator | 2026-01-02 00:59:35 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:35.549985 | orchestrator | 2026-01-02 00:59:35 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:35.550050 | orchestrator | 2026-01-02 00:59:35 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:38.604901 | orchestrator | 2026-01-02 00:59:38 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:38.607967 | orchestrator | 2026-01-02 00:59:38 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:38.610311 | orchestrator | 2026-01-02 00:59:38 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:38.612640 | orchestrator | 2026-01-02 00:59:38 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:38.614653 | orchestrator | 2026-01-02 00:59:38 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:38.614695 | orchestrator | 2026-01-02 00:59:38 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:41.667201 | orchestrator | 2026-01-02 00:59:41 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:41.668910 | orchestrator | 2026-01-02 00:59:41 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:41.670959 | orchestrator | 2026-01-02 00:59:41 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:41.672226 | orchestrator | 2026-01-02 00:59:41 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:41.675005 | orchestrator | 2026-01-02 00:59:41 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state STARTED
2026-01-02 00:59:41.675058 | orchestrator | 2026-01-02 00:59:41 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:44.716835 | orchestrator | 2026-01-02 00:59:44 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:44.718941 | orchestrator | 2026-01-02 00:59:44 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:44.720657 | orchestrator | 2026-01-02 00:59:44 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:44.723364 | orchestrator | 2026-01-02 00:59:44 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:44.724682 | orchestrator | 2026-01-02 00:59:44 | INFO  | Task 08808bed-091a-4cc8-b86b-5d44d61a63ab is in state SUCCESS
2026-01-02 00:59:44.725081 | orchestrator | 2026-01-02 00:59:44 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:47.781895 | orchestrator | 2026-01-02 00:59:47 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:47.784162 | orchestrator | 2026-01-02 00:59:47 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:47.786153 | orchestrator | 2026-01-02 00:59:47 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:47.789497 | orchestrator | 2026-01-02 00:59:47 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:47.790816 | orchestrator | 2026-01-02 00:59:47 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 00:59:47.791178 | orchestrator | 2026-01-02 00:59:47 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:50.836944 | orchestrator | 2026-01-02 00:59:50 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:50.838295 | orchestrator | 2026-01-02 00:59:50 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:50.840252 | orchestrator | 2026-01-02 00:59:50 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:50.842136 | orchestrator | 2026-01-02 00:59:50 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:50.844146 | orchestrator | 2026-01-02 00:59:50 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 00:59:50.844384 | orchestrator | 2026-01-02 00:59:50 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:53.887077 | orchestrator | 2026-01-02 00:59:53 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:53.890357 | orchestrator | 2026-01-02 00:59:53 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:53.892264 | orchestrator | 2026-01-02 00:59:53 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:53.894481 | orchestrator | 2026-01-02 00:59:53 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:53.896393 | orchestrator | 2026-01-02 00:59:53 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 00:59:53.896424 | orchestrator | 2026-01-02 00:59:53 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:56.940738 | orchestrator | 2026-01-02 00:59:56 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:56.944762 | orchestrator | 2026-01-02 00:59:56 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 00:59:56.947212 | orchestrator | 2026-01-02 00:59:56 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 00:59:56.948852 | orchestrator | 2026-01-02 00:59:56 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 00:59:56.951660 | orchestrator | 2026-01-02 00:59:56 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 00:59:56.951701 | orchestrator | 2026-01-02 00:59:56 | INFO  | Wait 1 second(s) until the next check
2026-01-02 00:59:59.996271 | orchestrator | 2026-01-02 00:59:59 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 00:59:59.997709 | orchestrator | 2026-01-02 00:59:59 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 01:00:00.001701 | orchestrator | 2026-01-02 01:00:00 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 01:00:00.003521 | orchestrator | 2026-01-02 01:00:00 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 01:00:00.005811 | orchestrator | 2026-01-02 01:00:00 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 01:00:00.005855 | orchestrator | 2026-01-02 01:00:00 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:00:03.048858 | orchestrator | 2026-01-02 01:00:03 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 01:00:03.050189 | orchestrator | 2026-01-02 01:00:03 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 01:00:03.051055 | orchestrator | 2026-01-02 01:00:03 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 01:00:03.052375 | orchestrator | 2026-01-02 01:00:03 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 01:00:03.052785 | orchestrator | 2026-01-02 01:00:03 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 01:00:03.052862 | orchestrator | 2026-01-02 01:00:03 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:00:06.096586 | orchestrator | 2026-01-02 01:00:06 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 01:00:06.097922 | orchestrator | 2026-01-02 01:00:06 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 01:00:06.100611 | orchestrator | 2026-01-02 01:00:06 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state STARTED
2026-01-02 01:00:06.102084 | orchestrator | 2026-01-02 01:00:06 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 01:00:06.103256 | orchestrator | 2026-01-02 01:00:06 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 01:00:06.103866 | orchestrator | 2026-01-02 01:00:06 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:00:09.145904 | orchestrator | 2026-01-02 01:00:09 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED
2026-01-02 01:00:09.145998 | orchestrator | 2026-01-02 01:00:09 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED
2026-01-02 01:00:09.150188 | orchestrator |
2026-01-02 01:00:09.150240 | orchestrator |
2026-01-02 01:00:09.150248 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-02 01:00:09.150255 | orchestrator |
2026-01-02 01:00:09.150269 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-02 01:00:09.150275 | orchestrator | Friday 02 January 2026 00:59:12 +0000 (0:00:00.185) 0:00:00.185 ********
2026-01-02 01:00:09.150280 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-02 01:00:09.150286 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150291 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150296 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-02 01:00:09.150338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-02 01:00:09.150397 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-02 01:00:09.150403 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-02 01:00:09.150421 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-02 01:00:09.150427 | orchestrator |
2026-01-02 01:00:09.150432 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-02 01:00:09.150437 | orchestrator | Friday 02 January 2026 00:59:16 +0000 (0:00:04.580) 0:00:04.765 ********
2026-01-02 01:00:09.150443 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-02 01:00:09.150448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150453 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-02 01:00:09.150463 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150468 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-02 01:00:09.150473 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-02 01:00:09.150478 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-02 01:00:09.150484 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-02 01:00:09.150489 | orchestrator |
2026-01-02 01:00:09.150494 | orchestrator | TASK [Create share directory] **************************************************
2026-01-02 01:00:09.150499 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:04.361) 0:00:09.127 ********
2026-01-02 01:00:09.150505 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-02 01:00:09.150510 | orchestrator |
2026-01-02 01:00:09.150515 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-02 01:00:09.150521 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.897) 0:00:10.024 ********
2026-01-02 01:00:09.150526 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-02 01:00:09.150532 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150537 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150543 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-02 01:00:09.150548 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150553 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-02 01:00:09.150558 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-02 01:00:09.150563 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-02 01:00:09.150568 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-02 01:00:09.150573 | orchestrator |
2026-01-02 01:00:09.150579 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-02 01:00:09.150584 | orchestrator | Friday 02 January 2026 00:59:34 +0000 (0:00:12.414) 0:00:22.438 ********
2026-01-02 01:00:09.150589 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-02 01:00:09.150594 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-02 01:00:09.150600 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-02 01:00:09.150605 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-02 01:00:09.150624 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-02 01:00:09.150633 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-02 01:00:09.150638 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-02 01:00:09.150643 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-02 01:00:09.150649 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-02 01:00:09.150654 | orchestrator |
2026-01-02 01:00:09.150659 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-02 01:00:09.150664 | orchestrator | Friday 02 January 2026 00:59:37 +0000 (0:00:03.049) 0:00:25.488 ********
2026-01-02 01:00:09.150670 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-02 01:00:09.150675 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150680 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150685 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-02 01:00:09.150690 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-02 01:00:09.150695 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-02 01:00:09.150700 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-02 01:00:09.150706 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-02 01:00:09.150711 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-02 01:00:09.150716 | orchestrator |
2026-01-02 01:00:09.150721 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:00:09.150726 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:00:09.150732 | orchestrator |
2026-01-02 01:00:09.150737 | orchestrator |
2026-01-02 01:00:09.150742 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:00:09.150747 | orchestrator | Friday 02 January 2026 00:59:44 +0000 (0:00:06.800) 0:00:32.289 ********
2026-01-02 01:00:09.150752 | orchestrator | ===============================================================================
2026-01-02 01:00:09.150757 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.41s
2026-01-02 01:00:09.150762 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.80s
2026-01-02 01:00:09.150767 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.58s
2026-01-02 01:00:09.150773 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.36s
2026-01-02 01:00:09.150778 | orchestrator | Check if target directories exist --------------------------------------- 3.05s
2026-01-02 01:00:09.150785 | orchestrator | Create share directory -------------------------------------------------- 0.90s
2026-01-02 01:00:09.150791 | orchestrator |
2026-01-02 01:00:09.150797 | orchestrator |
2026-01-02 01:00:09.150803 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 01:00:09.150810 | orchestrator |
2026-01-02 01:00:09.150816 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 01:00:09.150822 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:00.257) 0:00:00.257 ********
2026-01-02 01:00:09.150828 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:00:09.150835 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:00:09.150841 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:00:09.150847 | orchestrator |
2026-01-02 01:00:09.150853 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 01:00:09.150859 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:00.306) 0:00:00.563 ********
2026-01-02 01:00:09.150865 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-01-02 01:00:09.150875 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-01-02 01:00:09.150881 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-01-02 01:00:09.150887 | orchestrator |
2026-01-02 01:00:09.150893 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-01-02 01:00:09.150900 | orchestrator |
2026-01-02 01:00:09.150907 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-02 01:00:09.150913 | orchestrator | Friday 02 January 2026 00:58:14 +0000 (0:00:00.366) 0:00:00.930 ********
2026-01-02 01:00:09.150920 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:00:09.150926 | orchestrator |
2026-01-02 01:00:09.150931 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-01-02 01:00:09.150937 | orchestrator | Friday 02 January 2026 00:58:15 +0000 (0:00:00.495) 0:00:01.425 ********
2026-01-02 01:00:09.150955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE':
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.150963 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.150980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.150987 | orchestrator | 2026-01-02 01:00:09.150992 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-02 01:00:09.150997 | orchestrator | Friday 02 January 2026 00:58:16 +0000 (0:00:01.057) 0:00:02.482 ******** 2026-01-02 01:00:09.151002 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151008 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151013 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151021 | orchestrator | 2026-01-02 01:00:09.151026 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:00:09.151032 | orchestrator | Friday 02 January 2026 00:58:16 +0000 (0:00:00.390) 0:00:02.872 ******** 2026-01-02 01:00:09.151037 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-02 01:00:09.151042 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-02 01:00:09.151047 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-02 01:00:09.151053 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-02 01:00:09.151058 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-02 01:00:09.151063 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-02 01:00:09.151068 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-02 01:00:09.151073 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-02 01:00:09.151078 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-02 01:00:09.151083 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-02 01:00:09.151088 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-02 01:00:09.151093 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-02 01:00:09.151098 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-02 01:00:09.151104 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-02 01:00:09.151109 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-02 01:00:09.151114 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-02 01:00:09.151119 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-02 01:00:09.151124 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-02 01:00:09.151129 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-02 01:00:09.151134 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-02 01:00:09.151142 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-02 01:00:09.151151 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-02 01:00:09.151156 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-02 
01:00:09.151161 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-02 01:00:09.151167 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-02 01:00:09.151173 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-02 01:00:09.151178 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-02 01:00:09.151183 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-02 01:00:09.151188 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-02 01:00:09.151194 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-02 01:00:09.151202 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-02 01:00:09.151207 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-02 01:00:09.151212 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-02 01:00:09.151217 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-02 01:00:09.151222 | orchestrator | 2026-01-02 01:00:09.151228 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151233 | orchestrator | Friday 02 January 2026 00:58:17 +0000 (0:00:00.625) 0:00:03.498 ******** 2026-01-02 01:00:09.151238 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151243 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151248 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151253 | orchestrator | 2026-01-02 01:00:09.151258 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151264 | orchestrator | Friday 02 January 2026 00:58:17 +0000 (0:00:00.272) 0:00:03.771 ******** 2026-01-02 01:00:09.151269 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151274 | orchestrator | 2026-01-02 01:00:09.151279 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151284 | orchestrator | Friday 02 January 2026 00:58:17 +0000 (0:00:00.108) 0:00:03.880 ******** 2026-01-02 01:00:09.151289 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151294 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151299 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151318 | orchestrator | 2026-01-02 01:00:09.151323 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151328 | orchestrator | Friday 02 January 2026 00:58:18 +0000 (0:00:00.490) 0:00:04.370 ******** 2026-01-02 01:00:09.151334 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151339 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151344 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151349 | orchestrator | 2026-01-02 01:00:09.151354 | orchestrator | 
TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151359 | orchestrator | Friday 02 January 2026 00:58:18 +0000 (0:00:00.273) 0:00:04.644 ******** 2026-01-02 01:00:09.151365 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151370 | orchestrator | 2026-01-02 01:00:09.151375 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151380 | orchestrator | Friday 02 January 2026 00:58:18 +0000 (0:00:00.119) 0:00:04.763 ******** 2026-01-02 01:00:09.151385 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151390 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151395 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151400 | orchestrator | 2026-01-02 01:00:09.151405 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151411 | orchestrator | Friday 02 January 2026 00:58:19 +0000 (0:00:00.242) 0:00:05.006 ******** 2026-01-02 01:00:09.151416 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151421 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151426 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151431 | orchestrator | 2026-01-02 01:00:09.151436 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151441 | orchestrator | Friday 02 January 2026 00:58:19 +0000 (0:00:00.269) 0:00:05.276 ******** 2026-01-02 01:00:09.151447 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151452 | orchestrator | 2026-01-02 01:00:09.151457 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151466 | orchestrator | Friday 02 January 2026 00:58:19 +0000 (0:00:00.235) 0:00:05.511 ******** 2026-01-02 01:00:09.151473 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151479 | 
orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151484 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151489 | orchestrator | 2026-01-02 01:00:09.151497 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151502 | orchestrator | Friday 02 January 2026 00:58:19 +0000 (0:00:00.260) 0:00:05.771 ******** 2026-01-02 01:00:09.151508 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151513 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151518 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151523 | orchestrator | 2026-01-02 01:00:09.151528 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151533 | orchestrator | Friday 02 January 2026 00:58:20 +0000 (0:00:00.319) 0:00:06.091 ******** 2026-01-02 01:00:09.151539 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151544 | orchestrator | 2026-01-02 01:00:09.151549 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151554 | orchestrator | Friday 02 January 2026 00:58:20 +0000 (0:00:00.131) 0:00:06.223 ******** 2026-01-02 01:00:09.151559 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151564 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151569 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151574 | orchestrator | 2026-01-02 01:00:09.151579 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151585 | orchestrator | Friday 02 January 2026 00:58:20 +0000 (0:00:00.286) 0:00:06.509 ******** 2026-01-02 01:00:09.151590 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151595 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151600 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151605 | orchestrator | 2026-01-02 
01:00:09.151610 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151616 | orchestrator | Friday 02 January 2026 00:58:21 +0000 (0:00:00.483) 0:00:06.993 ******** 2026-01-02 01:00:09.151621 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151626 | orchestrator | 2026-01-02 01:00:09.151631 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151636 | orchestrator | Friday 02 January 2026 00:58:21 +0000 (0:00:00.134) 0:00:07.128 ******** 2026-01-02 01:00:09.151641 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151648 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151657 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151665 | orchestrator | 2026-01-02 01:00:09.151673 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151681 | orchestrator | Friday 02 January 2026 00:58:21 +0000 (0:00:00.309) 0:00:07.438 ******** 2026-01-02 01:00:09.151690 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151697 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151706 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151713 | orchestrator | 2026-01-02 01:00:09.151721 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151729 | orchestrator | Friday 02 January 2026 00:58:21 +0000 (0:00:00.318) 0:00:07.756 ******** 2026-01-02 01:00:09.151737 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151745 | orchestrator | 2026-01-02 01:00:09.151754 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151762 | orchestrator | Friday 02 January 2026 00:58:21 +0000 (0:00:00.128) 0:00:07.884 ******** 2026-01-02 01:00:09.151770 | orchestrator | skipping: [testbed-node-0] 
2026-01-02 01:00:09.151778 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151785 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151794 | orchestrator | 2026-01-02 01:00:09.151802 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151818 | orchestrator | Friday 02 January 2026 00:58:22 +0000 (0:00:00.308) 0:00:08.193 ******** 2026-01-02 01:00:09.151826 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151835 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151842 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151848 | orchestrator | 2026-01-02 01:00:09.151853 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151858 | orchestrator | Friday 02 January 2026 00:58:22 +0000 (0:00:00.536) 0:00:08.729 ******** 2026-01-02 01:00:09.151863 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151868 | orchestrator | 2026-01-02 01:00:09.151874 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151879 | orchestrator | Friday 02 January 2026 00:58:22 +0000 (0:00:00.142) 0:00:08.871 ******** 2026-01-02 01:00:09.151884 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151889 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151894 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151899 | orchestrator | 2026-01-02 01:00:09.151904 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151909 | orchestrator | Friday 02 January 2026 00:58:23 +0000 (0:00:00.305) 0:00:09.176 ******** 2026-01-02 01:00:09.151914 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.151920 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.151925 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.151930 | 
orchestrator | 2026-01-02 01:00:09.151935 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.151940 | orchestrator | Friday 02 January 2026 00:58:23 +0000 (0:00:00.309) 0:00:09.486 ******** 2026-01-02 01:00:09.151945 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151950 | orchestrator | 2026-01-02 01:00:09.151955 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.151960 | orchestrator | Friday 02 January 2026 00:58:23 +0000 (0:00:00.134) 0:00:09.620 ******** 2026-01-02 01:00:09.151965 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.151970 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.151975 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.151981 | orchestrator | 2026-01-02 01:00:09.151986 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.151991 | orchestrator | Friday 02 January 2026 00:58:24 +0000 (0:00:00.509) 0:00:10.130 ******** 2026-01-02 01:00:09.151996 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.152001 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.152006 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.152011 | orchestrator | 2026-01-02 01:00:09.152021 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.152026 | orchestrator | Friday 02 January 2026 00:58:24 +0000 (0:00:00.320) 0:00:10.451 ******** 2026-01-02 01:00:09.152031 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152037 | orchestrator | 2026-01-02 01:00:09.152042 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.152047 | orchestrator | Friday 02 January 2026 00:58:24 +0000 (0:00:00.150) 0:00:10.602 ******** 2026-01-02 01:00:09.152052 | orchestrator | 
skipping: [testbed-node-0] 2026-01-02 01:00:09.152057 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.152063 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.152068 | orchestrator | 2026-01-02 01:00:09.152073 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-02 01:00:09.152078 | orchestrator | Friday 02 January 2026 00:58:24 +0000 (0:00:00.300) 0:00:10.902 ******** 2026-01-02 01:00:09.152083 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:09.152088 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:09.152093 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:09.152098 | orchestrator | 2026-01-02 01:00:09.152104 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-02 01:00:09.152109 | orchestrator | Friday 02 January 2026 00:58:25 +0000 (0:00:00.312) 0:00:11.215 ******** 2026-01-02 01:00:09.152121 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152126 | orchestrator | 2026-01-02 01:00:09.152131 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-02 01:00:09.152136 | orchestrator | Friday 02 January 2026 00:58:25 +0000 (0:00:00.140) 0:00:11.355 ******** 2026-01-02 01:00:09.152141 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152146 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.152151 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.152157 | orchestrator | 2026-01-02 01:00:09.152162 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-02 01:00:09.152167 | orchestrator | Friday 02 January 2026 00:58:25 +0000 (0:00:00.504) 0:00:11.860 ******** 2026-01-02 01:00:09.152172 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:00:09.152177 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:00:09.152182 | orchestrator | changed: 
[testbed-node-1] 2026-01-02 01:00:09.152187 | orchestrator | 2026-01-02 01:00:09.152193 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-02 01:00:09.152198 | orchestrator | Friday 02 January 2026 00:58:27 +0000 (0:00:01.724) 0:00:13.585 ******** 2026-01-02 01:00:09.152203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-02 01:00:09.152208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-02 01:00:09.152213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-02 01:00:09.152219 | orchestrator | 2026-01-02 01:00:09.152224 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-02 01:00:09.152229 | orchestrator | Friday 02 January 2026 00:58:29 +0000 (0:00:02.003) 0:00:15.588 ******** 2026-01-02 01:00:09.152235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-02 01:00:09.152240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-02 01:00:09.152245 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-02 01:00:09.152251 | orchestrator | 2026-01-02 01:00:09.152256 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-02 01:00:09.152261 | orchestrator | Friday 02 January 2026 00:58:32 +0000 (0:00:02.445) 0:00:18.033 ******** 2026-01-02 01:00:09.152266 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-02 01:00:09.152271 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-02 01:00:09.152276 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-02 01:00:09.152282 | orchestrator | 2026-01-02 01:00:09.152287 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-02 01:00:09.152292 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:02.155) 0:00:20.189 ******** 2026-01-02 01:00:09.152297 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152327 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.152332 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.152337 | orchestrator | 2026-01-02 01:00:09.152342 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-02 01:00:09.152348 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:00.344) 0:00:20.534 ******** 2026-01-02 01:00:09.152353 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152358 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.152363 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.152368 | orchestrator | 2026-01-02 01:00:09.152373 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:00:09.152378 | orchestrator | Friday 02 January 2026 00:58:34 +0000 (0:00:00.280) 0:00:20.814 ******** 2026-01-02 01:00:09.152405 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:00:09.152411 | orchestrator | 2026-01-02 01:00:09.152416 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-02 01:00:09.152421 | orchestrator | Friday 02 January 2026 00:58:35 +0000 (0:00:00.761) 0:00:21.576 ******** 2026-01-02 01:00:09.152436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.152654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.152674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.152680 | orchestrator | 2026-01-02 01:00:09.152686 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-02 01:00:09.152691 | orchestrator | Friday 02 January 2026 00:58:37 +0000 (0:00:01.762) 0:00:23.339 ******** 2026-01-02 01:00:09.152704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.152715 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.152727 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.152739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.152748 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.152754 | orchestrator | 2026-01-02 01:00:09.152759 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-02 01:00:09.152764 | orchestrator | Friday 02 January 2026 00:58:38 +0000 (0:00:00.686) 0:00:24.025 ******** 2026-01-02 01:00:09.152770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.152778 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.152790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.152797 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.152802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.152811 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.152816 | orchestrator | 2026-01-02 01:00:09.152821 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-01-02 01:00:09.152827 | orchestrator | Friday 02 January 2026 00:58:38 +0000 (0:00:00.800) 0:00:24.825 ******** 2026-01-02 01:00:09.152840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.152860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.152879 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-02 01:00:09.152889 | orchestrator | 2026-01-02 01:00:09.152898 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-01-02 01:00:09.152907 | orchestrator | Friday 02 January 2026 00:58:40 +0000 (0:00:01.932) 0:00:26.757 ******** 2026-01-02 01:00:09.152919 | orchestrator | changed: [testbed-node-0] => { 2026-01-02 01:00:09.152927 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 01:00:09.152935 | orchestrator | } 2026-01-02 01:00:09.152944 | orchestrator | changed: [testbed-node-1] => { 2026-01-02 01:00:09.152953 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 01:00:09.152962 | orchestrator | } 2026-01-02 01:00:09.152970 | orchestrator | changed: [testbed-node-2] => { 2026-01-02 01:00:09.152979 | orchestrator |  "msg": "Notifying handlers" 2026-01-02 01:00:09.152988 | orchestrator | } 2026-01-02 01:00:09.152997 | orchestrator | 2026-01-02 01:00:09.153003 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-02 01:00:09.153008 | orchestrator | Friday 02 January 2026 00:58:41 +0000 (0:00:00.358) 0:00:27.116 ******** 2026-01-02 01:00:09.153022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.153029 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.153034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.153044 | orchestrator | 
skipping: [testbed-node-1] 2026-01-02 01:00:09.153056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-02 01:00:09.153062 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.153067 | orchestrator | 2026-01-02 01:00:09.153072 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:00:09.153078 | orchestrator | Friday 02 January 2026 00:58:42 +0000 (0:00:00.969) 0:00:28.085 ******** 2026-01-02 01:00:09.153083 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:09.153088 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:09.153097 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:09.153102 | orchestrator | 2026-01-02 01:00:09.153107 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-02 01:00:09.153112 | orchestrator | Friday 02 January 2026 00:58:42 +0000 (0:00:00.525) 0:00:28.611 ******** 2026-01-02 01:00:09.153117 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:00:09.153123 | orchestrator | 2026-01-02 01:00:09.153128 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-02 01:00:09.153133 | orchestrator | Friday 02 January 2026 00:58:43 +0000 (0:00:00.562) 0:00:29.173 ******** 2026-01-02 01:00:09.153138 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:00:09.153143 | orchestrator | 2026-01-02 01:00:09.153149 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-02 01:00:09.153154 | orchestrator | Friday 02 January 2026 00:58:45 +0000 (0:00:02.642) 0:00:31.816 ******** 2026-01-02 01:00:09.153159 | orchestrator | changed: [testbed-node-0] 
2026-01-02 01:00:09.153164 | orchestrator |
2026-01-02 01:00:09.153169 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-02 01:00:09.153175 | orchestrator | Friday 02 January 2026 00:58:48 +0000 (0:00:02.427) 0:00:34.243 ********
2026-01-02 01:00:09.153180 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:00:09.153185 | orchestrator |
2026-01-02 01:00:09.153190 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-02 01:00:09.153195 | orchestrator | Friday 02 January 2026 00:59:05 +0000 (0:00:16.993) 0:00:51.237 ********
2026-01-02 01:00:09.153200 | orchestrator |
2026-01-02 01:00:09.153206 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-02 01:00:09.153211 | orchestrator | Friday 02 January 2026 00:59:05 +0000 (0:00:00.063) 0:00:51.300 ********
2026-01-02 01:00:09.153216 | orchestrator |
2026-01-02 01:00:09.153221 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-02 01:00:09.153226 | orchestrator | Friday 02 January 2026 00:59:05 +0000 (0:00:00.226) 0:00:51.527 ********
2026-01-02 01:00:09.153231 | orchestrator |
2026-01-02 01:00:09.153237 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-02 01:00:09.153242 | orchestrator | Friday 02 January 2026 00:59:05 +0000 (0:00:00.065) 0:00:51.592 ********
2026-01-02 01:00:09.153247 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:00:09.153252 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:00:09.153257 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:00:09.153262 | orchestrator |
2026-01-02 01:00:09.153268 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:00:09.153273 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-01-02 01:00:09.153278 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-01-02 01:00:09.153286 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-01-02 01:00:09.153293 | orchestrator |
2026-01-02 01:00:09.153300 | orchestrator |
2026-01-02 01:00:09.153344 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:00:09.153351 | orchestrator | Friday 02 January 2026 01:00:05 +0000 (0:01:00.041) 0:01:51.634 ********
2026-01-02 01:00:09.153357 | orchestrator | ===============================================================================
2026-01-02 01:00:09.153363 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.04s
2026-01-02 01:00:09.153370 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.99s
2026-01-02 01:00:09.153376 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.64s
2026-01-02 01:00:09.153383 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.45s
2026-01-02 01:00:09.153393 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.43s
2026-01-02 01:00:09.153399 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.16s
2026-01-02 01:00:09.153405 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.00s
2026-01-02 01:00:09.153411 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.93s
2026-01-02 01:00:09.153417 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.76s
2026-01-02 01:00:09.153424 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.72s
2026-01-02 01:00:09.153430 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.06s
2026-01-02 01:00:09.153436 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.97s
2026-01-02 01:00:09.153443 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s
2026-01-02 01:00:09.153449 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2026-01-02 01:00:09.153456 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s
2026-01-02 01:00:09.153462 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s
2026-01-02 01:00:09.153468 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2026-01-02 01:00:09.153475 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2026-01-02 01:00:09.153481 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s
2026-01-02 01:00:09.153487 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2026-01-02 01:00:09.153494 | orchestrator | 2026-01-02 01:00:09 | INFO  | Task 53b94dfa-2853-4b9c-bdf4-59ee39731e37 is in state SUCCESS
2026-01-02 01:00:09.153500 | orchestrator | 2026-01-02 01:00:09 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:00:09.153506 | orchestrator | 2026-01-02 01:00:09 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED
2026-01-02 01:00:09.153513 | orchestrator | 2026-01-02 01:00:09 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED
2026-01-02 01:00:09.153519 | orchestrator | 2026-01-02 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:00:12.203156 | orchestrator | 2026-01-02 01:00:12 | INFO  | Task
d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED 2026-01-02 01:00:27.475442 | orchestrator | 2026-01-02 01:00:27 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state STARTED 2026-01-02 01:00:27.478126 | orchestrator | 2026-01-02 01:00:27 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:27.480134 | orchestrator | 2026-01-02 01:00:27 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state STARTED 2026-01-02 01:00:27.481608 | orchestrator | 2026-01-02 01:00:27 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED 2026-01-02 01:00:27.481970 | orchestrator | 2026-01-02 01:00:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:30.530561 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:30.532981 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state STARTED 2026-01-02 01:00:30.533859 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:30.535375 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task 5d5b70fc-e8e9-41c0-b3a1-1554840ffe8b is in state SUCCESS 2026-01-02 01:00:30.535771 | orchestrator | 2026-01-02 01:00:30.535785 | orchestrator | 2026-01-02 01:00:30.535790 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:00:30.535795 | orchestrator | 2026-01-02 01:00:30.535799 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:00:30.535803 | orchestrator | Friday 02 January 2026 00:59:20 +0000 (0:00:00.267) 0:00:00.267 ******** 2026-01-02 01:00:30.535807 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:30.535812 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:30.535816 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:30.535820 | orchestrator | 
2026-01-02 01:00:30.535824 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:00:30.535828 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.325) 0:00:00.593 ******** 2026-01-02 01:00:30.535833 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-02 01:00:30.535837 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-02 01:00:30.535841 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-02 01:00:30.535845 | orchestrator | 2026-01-02 01:00:30.535848 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-02 01:00:30.535852 | orchestrator | 2026-01-02 01:00:30.535856 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-02 01:00:30.535860 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.364) 0:00:00.957 ******** 2026-01-02 01:00:30.535864 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:00:30.535869 | orchestrator | 2026-01-02 01:00:30.535872 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-01-02 01:00:30.535887 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.437) 0:00:01.394 ******** 2026-01-02 01:00:30.535892 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left). 2026-01-02 01:00:30.535896 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left). 2026-01-02 01:00:30.535900 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left). 2026-01-02 01:00:30.535904 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left). 
2026-01-02 01:00:30.535908 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left). 2026-01-02 01:00:30.535932 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315626.9933255-3302-116893349607867/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315626.9933255-3302-116893349607867/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315626.9933255-3302-116893349607867/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_nvxpjdk1/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_nvxpjdk1/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_nvxpjdk1/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_nvxpjdk1/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_nvxpjdk1/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in 
_do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-02 01:00:30.535949 | orchestrator |
2026-01-02 01:00:30.535953 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:00:30.535957 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-02 01:00:30.535962 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:00:30.535968 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:00:30.535972 | orchestrator |
2026-01-02 01:00:30.535976 | orchestrator |
2026-01-02 01:00:30.535980 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:00:30.535983 | orchestrator | Friday 02 January 2026 01:00:28 +0000 (0:01:06.535) 0:01:07.930 ********
2026-01-02 01:00:30.535987 | orchestrator | ===============================================================================
2026-01-02 01:00:30.535991 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 66.54s
2026-01-02 01:00:30.535995 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.44s
2026-01-02 01:00:30.535999 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s
2026-01-02 01:00:30.536003 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-01-02 01:00:30.537416 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task
4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:30.538869 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task 461e371b-b47d-449a-ac0e-aad070881187 is in state SUCCESS 2026-01-02 01:00:30.539170 | orchestrator | 2026-01-02 01:00:30.539184 | orchestrator | 2026-01-02 01:00:30.539190 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:00:30.539196 | orchestrator | 2026-01-02 01:00:30.539200 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:00:30.539205 | orchestrator | Friday 02 January 2026 00:59:20 +0000 (0:00:00.251) 0:00:00.251 ******** 2026-01-02 01:00:30.539210 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:30.539216 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:30.539220 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:30.539225 | orchestrator | 2026-01-02 01:00:30.539230 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:00:30.539234 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.364) 0:00:00.616 ******** 2026-01-02 01:00:30.539240 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-02 01:00:30.539245 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-02 01:00:30.539249 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-02 01:00:30.539254 | orchestrator | 2026-01-02 01:00:30.539258 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-02 01:00:30.539263 | orchestrator | 2026-01-02 01:00:30.539299 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-02 01:00:30.539304 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.367) 0:00:00.984 ******** 2026-01-02 01:00:30.539309 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-02 01:00:30.539314 | orchestrator | 2026-01-02 01:00:30.539318 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-01-02 01:00:30.539323 | orchestrator | Friday 02 January 2026 00:59:22 +0000 (0:00:00.499) 0:00:01.483 ******** 2026-01-02 01:00:30.539328 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left). 2026-01-02 01:00:30.539333 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left). 2026-01-02 01:00:30.539337 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left). 2026-01-02 01:00:30.539342 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left). 2026-01-02 01:00:30.539346 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left). 2026-01-02 01:00:30.539374 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315627.3121006-3321-241033135813729/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315627.3121006-3321-241033135813729/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315627.3121006-3321-241033135813729/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pgh2smd0/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pgh2smd0/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pgh2smd0/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pgh2smd0/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_pgh2smd0/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:00:30.539386 | orchestrator | 2026-01-02 01:00:30.539394 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:00:30.539399 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-02 01:00:30.539405 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:00:30.539415 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:00:30.539419 | orchestrator | 2026-01-02 01:00:30.539424 | orchestrator | 2026-01-02 01:00:30.539429 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:00:30.539433 | orchestrator | Friday 02 January 2026 01:00:28 +0000 (0:01:06.580) 0:01:08.064 ******** 2026-01-02 01:00:30.539438 | orchestrator | =============================================================================== 2026-01-02 01:00:30.539442 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 66.58s 2026-01-02 01:00:30.539447 | orchestrator | designate : include_tasks ----------------------------------------------- 0.50s 2026-01-02 01:00:30.539452 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-01-02 01:00:30.539456 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-01-02 01:00:30.540404 | orchestrator | 2026-01-02 01:00:30 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED 2026-01-02 01:00:30.540494 | orchestrator | 2026-01-02 01:00:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:33.587214 | orchestrator | 2026-01-02 01:00:33 | INFO  | Task 
e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:33.587378 | orchestrator | 2026-01-02 01:00:33 | INFO  | Task d7222188-efef-442c-a1d8-36ff38f7aceb is in state SUCCESS 2026-01-02 01:00:33.588128 | orchestrator | 2026-01-02 01:00:33.588227 | orchestrator | 2026-01-02 01:00:33.588244 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:00:33.588258 | orchestrator | 2026-01-02 01:00:33.588269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:00:33.588307 | orchestrator | Friday 02 January 2026 00:59:20 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-01-02 01:00:33.588319 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:33.588332 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:33.588343 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:33.588354 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:33.588364 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:33.588376 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:33.588386 | orchestrator | 2026-01-02 01:00:33.588398 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:00:33.588532 | orchestrator | Friday 02 January 2026 00:59:21 +0000 (0:00:00.837) 0:00:01.087 ******** 2026-01-02 01:00:33.588546 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-02 01:00:33.588557 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-02 01:00:33.588569 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-02 01:00:33.588580 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-02 01:00:33.588591 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-02 01:00:33.588602 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-02 01:00:33.588613 | orchestrator | 2026-01-02 
01:00:33.588624 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-02 01:00:33.588635 | orchestrator | 2026-01-02 01:00:33.588645 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-02 01:00:33.588657 | orchestrator | Friday 02 January 2026 00:59:22 +0000 (0:00:00.631) 0:00:01.719 ******** 2026-01-02 01:00:33.588669 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:00:33.588680 | orchestrator | 2026-01-02 01:00:33.588691 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-02 01:00:33.588702 | orchestrator | Friday 02 January 2026 00:59:23 +0000 (0:00:01.055) 0:00:02.774 ******** 2026-01-02 01:00:33.588713 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:33.588753 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:33.588765 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:33.588776 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:33.588786 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:33.588798 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:33.588809 | orchestrator | 2026-01-02 01:00:33.588820 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-02 01:00:33.588831 | orchestrator | Friday 02 January 2026 00:59:24 +0000 (0:00:01.261) 0:00:04.036 ******** 2026-01-02 01:00:33.588842 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:00:33.588853 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:00:33.588864 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:00:33.588875 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:00:33.588885 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:00:33.588896 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:00:33.588907 | orchestrator | 2026-01-02 
01:00:33.588918 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-02 01:00:33.588929 | orchestrator | Friday 02 January 2026 00:59:25 +0000 (0:00:01.060) 0:00:05.096 ******** 2026-01-02 01:00:33.588940 | orchestrator | ok: [testbed-node-0] => { 2026-01-02 01:00:33.588952 | orchestrator |  "changed": false, 2026-01-02 01:00:33.588963 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:00:33.588975 | orchestrator | } 2026-01-02 01:00:33.588986 | orchestrator | ok: [testbed-node-1] => { 2026-01-02 01:00:33.589012 | orchestrator |  "changed": false, 2026-01-02 01:00:33.589023 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:00:33.589034 | orchestrator | } 2026-01-02 01:00:33.589045 | orchestrator | ok: [testbed-node-2] => { 2026-01-02 01:00:33.589057 | orchestrator |  "changed": false, 2026-01-02 01:00:33.589068 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:00:33.589181 | orchestrator | } 2026-01-02 01:00:33.589201 | orchestrator | ok: [testbed-node-3] => { 2026-01-02 01:00:33.589215 | orchestrator |  "changed": false, 2026-01-02 01:00:33.589227 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:00:33.589238 | orchestrator | } 2026-01-02 01:00:33.589249 | orchestrator | ok: [testbed-node-4] => { 2026-01-02 01:00:33.589260 | orchestrator |  "changed": false, 2026-01-02 01:00:33.589297 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:00:33.589311 | orchestrator | } 2026-01-02 01:00:33.589322 | orchestrator | ok: [testbed-node-5] => { 2026-01-02 01:00:33.589333 | orchestrator |  "changed": false, 2026-01-02 01:00:33.589345 | orchestrator |  "msg": "All assertions passed" 2026-01-02 01:00:33.589356 | orchestrator | } 2026-01-02 01:00:33.589367 | orchestrator | 2026-01-02 01:00:33.589379 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-02 01:00:33.589391 | orchestrator | Friday 02 January 2026 
00:59:26 +0000 (0:00:00.702) 0:00:05.799 ******** 2026-01-02 01:00:33.589402 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:00:33.589413 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:00:33.589424 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:00:33.589435 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:00:33.589445 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:00:33.589456 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:00:33.589467 | orchestrator | 2026-01-02 01:00:33.589479 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-01-02 01:00:33.589490 | orchestrator | Friday 02 January 2026 00:59:27 +0000 (0:00:00.528) 0:00:06.327 ******** 2026-01-02 01:00:33.589501 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left). 2026-01-02 01:00:33.589513 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left). 2026-01-02 01:00:33.589524 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left). 2026-01-02 01:00:33.589536 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left). 2026-01-02 01:00:33.589547 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left). 
2026-01-02 01:00:33.589618 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: 
Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315631.0224676-3358-232500080496292/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315631.0224676-3358-232500080496292/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315631.0224676-3358-232500080496292/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_40son1kk/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_40son1kk/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_40son1kk/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_40son1kk/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_40son1kk/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n 
File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. 
Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-02 01:00:33.589645 | orchestrator | 2026-01-02 01:00:33.589657 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:00:33.589674 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-01-02 01:00:33.589686 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:00:33.589698 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:00:33.589710 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:00:33.589722 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:00:33.589734 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:00:33.589745 | orchestrator | 2026-01-02 01:00:33.589756 | orchestrator | 2026-01-02 01:00:33.589768 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:00:33.589779 | orchestrator | Friday 02 January 2026 01:00:32 +0000 (0:01:05.368) 0:01:11.696 ******** 2026-01-02 01:00:33.589790 | orchestrator | =============================================================================== 2026-01-02 01:00:33.589808 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 65.37s 2026-01-02 01:00:33.589820 | orchestrator | neutron : Get container facts ------------------------------------------- 1.26s 2026-01-02 01:00:33.589831 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.06s 2026-01-02 
01:00:33.589842 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.06s 2026-01-02 01:00:33.589854 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-01-02 01:00:33.589865 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.70s 2026-01-02 01:00:33.589883 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-01-02 01:00:33.589895 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.53s 2026-01-02 01:00:33.589906 | orchestrator | 2026-01-02 01:00:33 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:33.589919 | orchestrator | 2026-01-02 01:00:33 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:33.591633 | orchestrator | 2026-01-02 01:00:33 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED 2026-01-02 01:00:33.591704 | orchestrator | 2026-01-02 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:36.639961 | orchestrator | 2026-01-02 01:00:36 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:36.640606 | orchestrator | 2026-01-02 01:00:36 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:36.641836 | orchestrator | 2026-01-02 01:00:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:36.642856 | orchestrator | 2026-01-02 01:00:36 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:36.646316 | orchestrator | 2026-01-02 01:00:36 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED 2026-01-02 01:00:36.646396 | orchestrator | 2026-01-02 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:39.679779 | orchestrator | 2026-01-02 01:00:39 | INFO  | Task 
e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:39.681732 | orchestrator | 2026-01-02 01:00:39 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:39.683866 | orchestrator | 2026-01-02 01:00:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:39.684952 | orchestrator | 2026-01-02 01:00:39 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:39.686549 | orchestrator | 2026-01-02 01:00:39 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED 2026-01-02 01:00:39.686598 | orchestrator | 2026-01-02 01:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:42.718625 | orchestrator | 2026-01-02 01:00:42 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:42.720811 | orchestrator | 2026-01-02 01:00:42 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:42.723348 | orchestrator | 2026-01-02 01:00:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:42.726085 | orchestrator | 2026-01-02 01:00:42 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:42.728053 | orchestrator | 2026-01-02 01:00:42 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state STARTED 2026-01-02 01:00:42.728108 | orchestrator | 2026-01-02 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:45.780977 | orchestrator | 2026-01-02 01:00:45 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:45.782100 | orchestrator | 2026-01-02 01:00:45 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:45.784414 | orchestrator | 2026-01-02 01:00:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:45.786159 | orchestrator | 2026-01-02 01:00:45 | INFO  | Task 
51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:00:45.787867 | orchestrator | 2026-01-02 01:00:45 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:45.790854 | orchestrator | 2026-01-02 01:00:45 | INFO  | Task 252ce284-b578-4556-9078-002a21a79a55 is in state SUCCESS 2026-01-02 01:00:45.790928 | orchestrator | 2026-01-02 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:48.844023 | orchestrator | 2026-01-02 01:00:48 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:48.846251 | orchestrator | 2026-01-02 01:00:48 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:48.847661 | orchestrator | 2026-01-02 01:00:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:48.849554 | orchestrator | 2026-01-02 01:00:48 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:00:48.851619 | orchestrator | 2026-01-02 01:00:48 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:48.851688 | orchestrator | 2026-01-02 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:51.896544 | orchestrator | 2026-01-02 01:00:51 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:51.897330 | orchestrator | 2026-01-02 01:00:51 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:51.899484 | orchestrator | 2026-01-02 01:00:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:51.901369 | orchestrator | 2026-01-02 01:00:51 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:00:51.902821 | orchestrator | 2026-01-02 01:00:51 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:51.903593 | orchestrator | 2026-01-02 01:00:51 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:00:54.946378 | orchestrator | 2026-01-02 01:00:54 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:54.949702 | orchestrator | 2026-01-02 01:00:54 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:54.952575 | orchestrator | 2026-01-02 01:00:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:54.955174 | orchestrator | 2026-01-02 01:00:54 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:00:54.957005 | orchestrator | 2026-01-02 01:00:54 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:54.957186 | orchestrator | 2026-01-02 01:00:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:00:57.996663 | orchestrator | 2026-01-02 01:00:57 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:00:57.998186 | orchestrator | 2026-01-02 01:00:57 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:00:58.001410 | orchestrator | 2026-01-02 01:00:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:00:58.004202 | orchestrator | 2026-01-02 01:00:58 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:00:58.006592 | orchestrator | 2026-01-02 01:00:58 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:00:58.006639 | orchestrator | 2026-01-02 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:01.049834 | orchestrator | 2026-01-02 01:01:01 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:01:01.052183 | orchestrator | 2026-01-02 01:01:01 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:01:01.055041 | orchestrator | 2026-01-02 01:01:01 | INFO  | Task 
98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:01.056903 | orchestrator | 2026-01-02 01:01:01 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:01.059007 | orchestrator | 2026-01-02 01:01:01 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:01.059068 | orchestrator | 2026-01-02 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:04.101470 | orchestrator | 2026-01-02 01:01:04 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:01:04.104303 | orchestrator | 2026-01-02 01:01:04 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:01:04.107176 | orchestrator | 2026-01-02 01:01:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:04.109426 | orchestrator | 2026-01-02 01:01:04 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:04.111434 | orchestrator | 2026-01-02 01:01:04 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:04.111665 | orchestrator | 2026-01-02 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:07.165771 | orchestrator | 2026-01-02 01:01:07 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:01:07.168913 | orchestrator | 2026-01-02 01:01:07 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:01:07.170459 | orchestrator | 2026-01-02 01:01:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:07.171816 | orchestrator | 2026-01-02 01:01:07 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:07.173366 | orchestrator | 2026-01-02 01:01:07 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:07.173473 | orchestrator | 2026-01-02 01:01:07 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:01:10.218715 | orchestrator | 2026-01-02 01:01:10 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:01:10.221017 | orchestrator | 2026-01-02 01:01:10 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:01:10.222220 | orchestrator | 2026-01-02 01:01:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:10.223907 | orchestrator | 2026-01-02 01:01:10 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:10.225338 | orchestrator | 2026-01-02 01:01:10 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:10.225500 | orchestrator | 2026-01-02 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:13.273616 | orchestrator | 2026-01-02 01:01:13 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:01:13.275922 | orchestrator | 2026-01-02 01:01:13 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:01:13.278606 | orchestrator | 2026-01-02 01:01:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:13.280621 | orchestrator | 2026-01-02 01:01:13 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:13.282521 | orchestrator | 2026-01-02 01:01:13 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:13.282563 | orchestrator | 2026-01-02 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:16.327127 | orchestrator | 2026-01-02 01:01:16 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED 2026-01-02 01:01:16.329360 | orchestrator | 2026-01-02 01:01:16 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED 2026-01-02 01:01:16.331128 | orchestrator | 2026-01-02 01:01:16 | INFO  | Task 
98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:16.333137 | orchestrator | 2026-01-02 01:01:16 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:16.334909 | orchestrator | 2026-01-02 01:01:16 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:16.334956 | orchestrator | 2026-01-02 01:01:16 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:19.385366 | orchestrator | 2026-01-02 01:01:19 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:19.387790 | orchestrator | 2026-01-02 01:01:19 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:19.390159 | orchestrator | 2026-01-02 01:01:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:19.392517 | orchestrator | 2026-01-02 01:01:19 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:19.394832 | orchestrator | 2026-01-02 01:01:19 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:19.394978 | orchestrator | 2026-01-02 01:01:19 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:22.441374 | orchestrator | 2026-01-02 01:01:22 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:22.442391 | orchestrator | 2026-01-02 01:01:22 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:22.443916 | orchestrator | 2026-01-02 01:01:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:22.445484 | orchestrator | 2026-01-02 01:01:22 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:22.447068 | orchestrator | 2026-01-02 01:01:22 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:22.447135 | orchestrator | 2026-01-02 01:01:22 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:25.500763 | orchestrator | 2026-01-02 01:01:25 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:25.502749 | orchestrator | 2026-01-02 01:01:25 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:25.504737 | orchestrator | 2026-01-02 01:01:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:25.506608 | orchestrator | 2026-01-02 01:01:25 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:25.508040 | orchestrator | 2026-01-02 01:01:25 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:25.508086 | orchestrator | 2026-01-02 01:01:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:28.555520 | orchestrator | 2026-01-02 01:01:28 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:28.557154 | orchestrator | 2026-01-02 01:01:28 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:28.559670 | orchestrator | 2026-01-02 01:01:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:28.561764 | orchestrator | 2026-01-02 01:01:28 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:28.563535 | orchestrator | 2026-01-02 01:01:28 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:28.563675 | orchestrator | 2026-01-02 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:31.611976 | orchestrator | 2026-01-02 01:01:31 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:31.614353 | orchestrator | 2026-01-02 01:01:31 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:31.616765 | orchestrator | 2026-01-02 01:01:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:31.618307 | orchestrator | 2026-01-02 01:01:31 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:31.620345 | orchestrator | 2026-01-02 01:01:31 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:31.620393 | orchestrator | 2026-01-02 01:01:31 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:34.670914 | orchestrator | 2026-01-02 01:01:34 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:34.672619 | orchestrator | 2026-01-02 01:01:34 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:34.674169 | orchestrator | 2026-01-02 01:01:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:34.675760 | orchestrator | 2026-01-02 01:01:34 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:34.677395 | orchestrator | 2026-01-02 01:01:34 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:34.677437 | orchestrator | 2026-01-02 01:01:34 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:01:37.726111 | orchestrator | 2026-01-02 01:01:37 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state STARTED
2026-01-02 01:01:37.727585 | orchestrator | 2026-01-02 01:01:37 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state STARTED
2026-01-02 01:01:37.729656 | orchestrator | 2026-01-02 01:01:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:37.731366 | orchestrator | 2026-01-02 01:01:37 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:37.733001 | orchestrator | 2026-01-02 01:01:37 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED
2026-01-02 01:01:37.733129 | orchestrator | 2026-01-02 01:01:37 | INFO  | Wait 1
second(s) until the next check
2026-01-02 01:01:40.780708 | orchestrator | 2026-01-02 01:01:40 | INFO  | Task e05e112e-a5a1-4a7c-8660-deb2827550bc is in state SUCCESS
2026-01-02 01:01:40.781749 | orchestrator |
2026-01-02 01:01:40.781799 | orchestrator |
2026-01-02 01:01:40.781812 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-02 01:01:40.781824 | orchestrator |
2026-01-02 01:01:40.781836 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-02 01:01:40.781847 | orchestrator | Friday 02 January 2026 00:59:48 +0000 (0:00:00.229) 0:00:00.229 ********
2026-01-02 01:01:40.781858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-02 01:01:40.781950 | orchestrator |
2026-01-02 01:01:40.782007 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-02 01:01:40.782066 | orchestrator | Friday 02 January 2026 00:59:49 +0000 (0:00:00.225) 0:00:00.455 ********
2026-01-02 01:01:40.782080 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-02 01:01:40.782091 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-02 01:01:40.782103 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-02 01:01:40.782115 | orchestrator |
2026-01-02 01:01:40.782126 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-02 01:01:40.782136 | orchestrator | Friday 02 January 2026 00:59:50 +0000 (0:00:01.302) 0:00:01.757 ********
2026-01-02 01:01:40.782148 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-02 01:01:40.782159 | orchestrator |
2026-01-02 01:01:40.782169 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-02 01:01:40.782180 | orchestrator | Friday 02 January 2026 00:59:51 +0000 (0:00:01.505) 0:00:03.263 ********
2026-01-02 01:01:40.782231 | orchestrator | changed: [testbed-manager]
2026-01-02 01:01:40.782243 | orchestrator |
2026-01-02 01:01:40.782256 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-02 01:01:40.782267 | orchestrator | Friday 02 January 2026 00:59:52 +0000 (0:00:00.917) 0:00:04.181 ********
2026-01-02 01:01:40.782277 | orchestrator | changed: [testbed-manager]
2026-01-02 01:01:40.782288 | orchestrator |
2026-01-02 01:01:40.782299 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-02 01:01:40.782310 | orchestrator | Friday 02 January 2026 00:59:53 +0000 (0:00:00.950) 0:00:05.131 ********
2026-01-02 01:01:40.782321 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-02 01:01:40.782332 | orchestrator | ok: [testbed-manager]
2026-01-02 01:01:40.782343 | orchestrator |
2026-01-02 01:01:40.782353 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-02 01:01:40.782364 | orchestrator | Friday 02 January 2026 01:00:35 +0000 (0:00:41.485) 0:00:46.617 ********
2026-01-02 01:01:40.782375 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-02 01:01:40.782386 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-02 01:01:40.782397 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-02 01:01:40.782408 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-02 01:01:40.782418 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-02 01:01:40.782429 | orchestrator |
2026-01-02 01:01:40.782440 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-02 01:01:40.782451 | orchestrator | Friday 02 January 2026 01:00:38 +0000 (0:00:03.459) 0:00:50.076 ********
2026-01-02 01:01:40.782461 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-02 01:01:40.782472 | orchestrator |
2026-01-02 01:01:40.782483 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-02 01:01:40.782493 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.125) 0:00:50.496 ********
2026-01-02 01:01:40.782504 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:01:40.782515 | orchestrator |
2026-01-02 01:01:40.782525 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-02 01:01:40.782536 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.125) 0:00:50.621 ********
2026-01-02 01:01:40.782547 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:01:40.782558 | orchestrator |
2026-01-02 01:01:40.782568 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-02 01:01:40.782579 | orchestrator | Friday 02 January 2026 01:00:39 +0000 (0:00:00.465) 0:00:51.087 ********
2026-01-02 01:01:40.782590 | orchestrator | changed: [testbed-manager]
2026-01-02 01:01:40.782610 | orchestrator |
2026-01-02 01:01:40.782621 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-02 01:01:40.782632 | orchestrator | Friday 02 January 2026 01:00:40 +0000 (0:00:01.335) 0:00:52.423 ********
2026-01-02 01:01:40.782643 | orchestrator | changed: [testbed-manager]
2026-01-02 01:01:40.782653 | orchestrator |
2026-01-02 01:01:40.782664 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-02 01:01:40.782675 | orchestrator | Friday 02 January 2026 01:00:41 +0000 (0:00:00.695) 0:00:53.118 ********
2026-01-02 01:01:40.782700 | orchestrator | changed: [testbed-manager]
2026-01-02 01:01:40.782711 | orchestrator |
2026-01-02 01:01:40.782722 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-02 01:01:40.782733 | orchestrator | Friday 02 January 2026 01:00:42 +0000 (0:00:00.528) 0:00:53.647 ********
2026-01-02 01:01:40.782744 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-02 01:01:40.782755 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-02 01:01:40.782766 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-02 01:01:40.782776 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-02 01:01:40.782787 | orchestrator |
2026-01-02 01:01:40.782798 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:01:40.782809 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-02 01:01:40.782820 | orchestrator |
2026-01-02 01:01:40.782831 | orchestrator |
2026-01-02 01:01:40.782857 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:01:40.782869 | orchestrator | Friday 02 January 2026 01:00:43 +0000 (0:00:01.379) 0:00:55.026 ********
2026-01-02 01:01:40.782880 | orchestrator | ===============================================================================
2026-01-02 01:01:40.782891 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.49s
2026-01-02 01:01:40.782902 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.46s
2026-01-02 01:01:40.782913 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.51s
2026-01-02 01:01:40.782923 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.38s
2026-01-02 01:01:40.782934 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.34s
2026-01-02 01:01:40.782945 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.30s
2026-01-02 01:01:40.782956 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s
2026-01-02 01:01:40.782966 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s
2026-01-02 01:01:40.782977 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s
2026-01-02 01:01:40.782988 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s
2026-01-02 01:01:40.782999 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.47s
2026-01-02 01:01:40.783009 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s
2026-01-02 01:01:40.783020 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2026-01-02 01:01:40.783031 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-01-02 01:01:40.783042 | orchestrator |
2026-01-02 01:01:40.783053 | orchestrator |
2026-01-02 01:01:40.783063 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 01:01:40.783074 | orchestrator |
2026-01-02 01:01:40.783085 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 01:01:40.783103 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:00.245) 0:00:00.245 ********
2026-01-02 01:01:40.783122 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:01:40.783141 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:01:40.783161 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:01:40.783179 | orchestrator |
2026-01-02 01:01:40.783234 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 01:01:40.783254 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:00.252) 0:00:00.498 ********
2026-01-02 01:01:40.783272 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-02 01:01:40.783290 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-02 01:01:40.783309 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-02 01:01:40.783328 | orchestrator |
2026-01-02 01:01:40.783348 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-02 01:01:40.783362 | orchestrator |
2026-01-02 01:01:40.783381 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-02 01:01:40.783400 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:00.472) 0:00:00.849 ********
2026-01-02 01:01:40.783416 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:01:40.783428 |
orchestrator |
2026-01-02 01:01:40.783439 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-01-02 01:01:40.783450 | orchestrator | Friday 02 January 2026 01:00:34 +0000 (0:00:00.472) 0:00:01.322 ********
2026-01-02 01:01:40.783460 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-01-02 01:01:40.783500 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-01-02 01:01:40.783512 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-01-02 01:01:40.783523 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-01-02 01:01:40.783533 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-01-02 01:01:40.783595 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315698.4200864-3765-255542791658202/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315698.4200864-3765-255542791658202/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315698.4200864-3765-255542791658202/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_ld3j9d14/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_ld3j9d14/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_ld3j9d14/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_ld3j9d14/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_ld3j9d14/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-02 01:01:40.783628 | orchestrator |
2026-01-02 01:01:40.783639 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:01:40.783651 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-02 01:01:40.783662 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:01:40.783674 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:01:40.783684 | orchestrator |
2026-01-02 01:01:40.783695 | orchestrator |
2026-01-02 01:01:40.783706 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:01:40.783717 | orchestrator | Friday 02 January 2026 01:01:39 +0000 (0:01:05.752) 0:01:07.074 ********
2026-01-02 01:01:40.783728 | orchestrator | ===============================================================================
2026-01-02 01:01:40.783739 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 65.75s
2026-01-02 01:01:40.783750 | orchestrator | placement : include_tasks ----------------------------------------------- 0.47s
2026-01-02 01:01:40.783761 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-01-02 01:01:40.783771 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2026-01-02 01:01:40.783782 | orchestrator | 2026-01-02 01:01:40 | INFO  | Task a9bc3f67-9c27-46d2-a30d-adf0342c1520 is in state SUCCESS
2026-01-02 01:01:40.783937 | orchestrator |
2026-01-02 01:01:40.783953 | orchestrator |
2026-01-02 01:01:40.783964 | orchestrator | PLAY [Group hosts based on configuration]
**************************************
2026-01-02 01:01:40.783975 | orchestrator |
2026-01-02 01:01:40.783985 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 01:01:40.783996 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:00.232) 0:00:00.232 ********
2026-01-02 01:01:40.784007 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:01:40.784018 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:01:40.784029 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:01:40.784039 | orchestrator |
2026-01-02 01:01:40.784050 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 01:01:40.784061 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:00.261) 0:00:00.494 ********
2026-01-02 01:01:40.784072 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-02 01:01:40.784083 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-02 01:01:40.784093 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-02 01:01:40.784104 | orchestrator |
2026-01-02 01:01:40.784115 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-02 01:01:40.784126 | orchestrator |
2026-01-02 01:01:40.784137 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-02 01:01:40.784153 | orchestrator | Friday 02 January 2026 01:00:33 +0000 (0:00:00.334) 0:00:00.828 ********
2026-01-02 01:01:40.784164 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:01:40.784175 | orchestrator |
2026-01-02 01:01:40.784186 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-01-02 01:01:40.784244 | orchestrator | Friday 02 January 2026 01:00:34 +0000 (0:00:00.453) 0:00:01.282 ********
2026-01-02 01:01:40.784255 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-01-02 01:01:40.784266 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-01-02 01:01:40.784277 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-01-02 01:01:40.784297 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-01-02 01:01:40.784308 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-01-02 01:01:40.784345 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767315698.751109-3783-117613366717720/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767315698.751109-3783-117613366717720/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767315698.751109-3783-117613366717720/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_cb1nezx7/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_cb1nezx7/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_cb1nezx7/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_cb1nezx7/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_cb1nezx7/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-02 01:01:40.784366 | orchestrator |
2026-01-02 01:01:40.784377 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:01:40.784388 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-02 01:01:40.784399 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:01:40.784410 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-02 01:01:40.784421 | orchestrator |
2026-01-02 01:01:40.784432 | orchestrator |
2026-01-02 01:01:40.784442 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:01:40.784453 | orchestrator | Friday 02 January 2026 01:01:40 +0000 (0:01:05.763) 0:01:07.046 ********
2026-01-02 01:01:40.784464 | orchestrator | ===============================================================================
2026-01-02 01:01:40.784474 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 65.76s
2026-01-02 01:01:40.784499 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.45s
2026-01-02 01:01:40.784526 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s
2026-01-02 01:01:40.784563 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2026-01-02 01:01:40.784581 | orchestrator | 2026-01-02 01:01:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:01:40.785275 | orchestrator | 2026-01-02 01:01:40 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED
2026-01-02 01:01:40.786985 | orchestrator | 2026-01-02
01:01:40 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:40.787099 | orchestrator | 2026-01-02 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:43.851394 | orchestrator | 2026-01-02 01:01:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:01:43.853243 | orchestrator | 2026-01-02 01:01:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:43.856806 | orchestrator | 2026-01-02 01:01:43 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:43.858894 | orchestrator | 2026-01-02 01:01:43 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:43.861500 | orchestrator | 2026-01-02 01:01:43 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:01:43.861592 | orchestrator | 2026-01-02 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:46.901388 | orchestrator | 2026-01-02 01:01:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:01:46.903984 | orchestrator | 2026-01-02 01:01:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:46.906486 | orchestrator | 2026-01-02 01:01:46 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:46.908764 | orchestrator | 2026-01-02 01:01:46 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:46.910462 | orchestrator | 2026-01-02 01:01:46 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:01:46.910583 | orchestrator | 2026-01-02 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:49.960922 | orchestrator | 2026-01-02 01:01:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:01:49.961049 | orchestrator | 2026-01-02 01:01:49 | INFO  | Task 
98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:49.962248 | orchestrator | 2026-01-02 01:01:49 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:49.963334 | orchestrator | 2026-01-02 01:01:49 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:49.964802 | orchestrator | 2026-01-02 01:01:49 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:01:49.964836 | orchestrator | 2026-01-02 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:53.004419 | orchestrator | 2026-01-02 01:01:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:01:53.009152 | orchestrator | 2026-01-02 01:01:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:53.012386 | orchestrator | 2026-01-02 01:01:53 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:53.015363 | orchestrator | 2026-01-02 01:01:53 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:53.020101 | orchestrator | 2026-01-02 01:01:53 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:01:53.026426 | orchestrator | 2026-01-02 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:56.065517 | orchestrator | 2026-01-02 01:01:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:01:56.066273 | orchestrator | 2026-01-02 01:01:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:56.067561 | orchestrator | 2026-01-02 01:01:56 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:56.069279 | orchestrator | 2026-01-02 01:01:56 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:56.071425 | orchestrator | 2026-01-02 01:01:56 | INFO  | Task 
1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:01:56.071631 | orchestrator | 2026-01-02 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:01:59.099127 | orchestrator | 2026-01-02 01:01:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:01:59.099459 | orchestrator | 2026-01-02 01:01:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:01:59.099978 | orchestrator | 2026-01-02 01:01:59 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:01:59.100874 | orchestrator | 2026-01-02 01:01:59 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:01:59.102866 | orchestrator | 2026-01-02 01:01:59 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:01:59.102937 | orchestrator | 2026-01-02 01:01:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:02.141036 | orchestrator | 2026-01-02 01:02:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:02.142987 | orchestrator | 2026-01-02 01:02:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:02.144545 | orchestrator | 2026-01-02 01:02:02 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:02.146578 | orchestrator | 2026-01-02 01:02:02 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:02.148153 | orchestrator | 2026-01-02 01:02:02 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:02.148294 | orchestrator | 2026-01-02 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:05.193878 | orchestrator | 2026-01-02 01:02:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:05.193976 | orchestrator | 2026-01-02 01:02:05 | INFO  | Task 
98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:05.194299 | orchestrator | 2026-01-02 01:02:05 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:05.196241 | orchestrator | 2026-01-02 01:02:05 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:05.196634 | orchestrator | 2026-01-02 01:02:05 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:05.196803 | orchestrator | 2026-01-02 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:08.244377 | orchestrator | 2026-01-02 01:02:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:08.246199 | orchestrator | 2026-01-02 01:02:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:08.248485 | orchestrator | 2026-01-02 01:02:08 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:08.250133 | orchestrator | 2026-01-02 01:02:08 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:08.251840 | orchestrator | 2026-01-02 01:02:08 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:08.251875 | orchestrator | 2026-01-02 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:11.302154 | orchestrator | 2026-01-02 01:02:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:11.303497 | orchestrator | 2026-01-02 01:02:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:11.305212 | orchestrator | 2026-01-02 01:02:11 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:11.307420 | orchestrator | 2026-01-02 01:02:11 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:11.308691 | orchestrator | 2026-01-02 01:02:11 | INFO  | Task 
1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:11.308944 | orchestrator | 2026-01-02 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:14.387559 | orchestrator | 2026-01-02 01:02:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:14.392503 | orchestrator | 2026-01-02 01:02:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:14.392548 | orchestrator | 2026-01-02 01:02:14 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:14.394644 | orchestrator | 2026-01-02 01:02:14 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:14.396398 | orchestrator | 2026-01-02 01:02:14 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:14.396617 | orchestrator | 2026-01-02 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:17.467959 | orchestrator | 2026-01-02 01:02:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:17.470439 | orchestrator | 2026-01-02 01:02:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:17.473026 | orchestrator | 2026-01-02 01:02:17 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:17.474678 | orchestrator | 2026-01-02 01:02:17 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:17.476442 | orchestrator | 2026-01-02 01:02:17 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:17.476497 | orchestrator | 2026-01-02 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:20.535482 | orchestrator | 2026-01-02 01:02:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:20.535597 | orchestrator | 2026-01-02 01:02:20 | INFO  | Task 
98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:20.536743 | orchestrator | 2026-01-02 01:02:20 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state STARTED 2026-01-02 01:02:20.537901 | orchestrator | 2026-01-02 01:02:20 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:20.540478 | orchestrator | 2026-01-02 01:02:20 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:20.540844 | orchestrator | 2026-01-02 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:23.598773 | orchestrator | 2026-01-02 01:02:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:23.599609 | orchestrator | 2026-01-02 01:02:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:23.601305 | orchestrator | 2026-01-02 01:02:23 | INFO  | Task 51a4a66f-16ef-4844-927f-c99466b6c650 is in state SUCCESS 2026-01-02 01:02:23.602885 | orchestrator | 2026-01-02 01:02:23 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:23.604741 | orchestrator | 2026-01-02 01:02:23 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:23.604777 | orchestrator | 2026-01-02 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:26.664981 | orchestrator | 2026-01-02 01:02:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:26.666002 | orchestrator | 2026-01-02 01:02:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:26.667783 | orchestrator | 2026-01-02 01:02:26 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:26.669639 | orchestrator | 2026-01-02 01:02:26 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:26.669692 | orchestrator | 2026-01-02 01:02:26 | INFO  | Wait 1 
second(s) until the next check 2026-01-02 01:02:29.714654 | orchestrator | 2026-01-02 01:02:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:29.716634 | orchestrator | 2026-01-02 01:02:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:29.718379 | orchestrator | 2026-01-02 01:02:29 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:29.720015 | orchestrator | 2026-01-02 01:02:29 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:29.720318 | orchestrator | 2026-01-02 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:32.768419 | orchestrator | 2026-01-02 01:02:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:32.770733 | orchestrator | 2026-01-02 01:02:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:32.773421 | orchestrator | 2026-01-02 01:02:32 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:32.778470 | orchestrator | 2026-01-02 01:02:32 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:32.778521 | orchestrator | 2026-01-02 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:35.822637 | orchestrator | 2026-01-02 01:02:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:35.823481 | orchestrator | 2026-01-02 01:02:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:35.824704 | orchestrator | 2026-01-02 01:02:35 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:35.826244 | orchestrator | 2026-01-02 01:02:35 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:35.826266 | orchestrator | 2026-01-02 01:02:35 | INFO  | Wait 1 second(s) until the next check 
2026-01-02 01:02:38.874298 | orchestrator | 2026-01-02 01:02:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:38.874378 | orchestrator | 2026-01-02 01:02:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:38.874747 | orchestrator | 2026-01-02 01:02:38 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:38.876492 | orchestrator | 2026-01-02 01:02:38 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:38.876529 | orchestrator | 2026-01-02 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:41.921624 | orchestrator | 2026-01-02 01:02:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:41.924314 | orchestrator | 2026-01-02 01:02:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:41.927676 | orchestrator | 2026-01-02 01:02:41 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:41.929777 | orchestrator | 2026-01-02 01:02:41 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:41.929821 | orchestrator | 2026-01-02 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:44.972586 | orchestrator | 2026-01-02 01:02:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:44.973336 | orchestrator | 2026-01-02 01:02:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:44.975382 | orchestrator | 2026-01-02 01:02:44 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:44.978536 | orchestrator | 2026-01-02 01:02:44 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:44.978643 | orchestrator | 2026-01-02 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:48.031147 | 
orchestrator | 2026-01-02 01:02:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:48.031537 | orchestrator | 2026-01-02 01:02:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:48.033247 | orchestrator | 2026-01-02 01:02:48 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:48.034300 | orchestrator | 2026-01-02 01:02:48 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:48.034661 | orchestrator | 2026-01-02 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:51.073591 | orchestrator | 2026-01-02 01:02:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:51.075758 | orchestrator | 2026-01-02 01:02:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:51.078500 | orchestrator | 2026-01-02 01:02:51 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:51.080619 | orchestrator | 2026-01-02 01:02:51 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:51.080701 | orchestrator | 2026-01-02 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:54.130502 | orchestrator | 2026-01-02 01:02:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:54.133431 | orchestrator | 2026-01-02 01:02:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:54.135520 | orchestrator | 2026-01-02 01:02:54 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:54.138364 | orchestrator | 2026-01-02 01:02:54 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:54.138398 | orchestrator | 2026-01-02 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:02:57.172865 | orchestrator | 2026-01-02 
01:02:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:02:57.174112 | orchestrator | 2026-01-02 01:02:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:02:57.176105 | orchestrator | 2026-01-02 01:02:57 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:02:57.177343 | orchestrator | 2026-01-02 01:02:57 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:02:57.177378 | orchestrator | 2026-01-02 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:00.219619 | orchestrator | 2026-01-02 01:03:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:00.221023 | orchestrator | 2026-01-02 01:03:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:00.222123 | orchestrator | 2026-01-02 01:03:00 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:00.223805 | orchestrator | 2026-01-02 01:03:00 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:00.223845 | orchestrator | 2026-01-02 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:03.257982 | orchestrator | 2026-01-02 01:03:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:03.258470 | orchestrator | 2026-01-02 01:03:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:03.259540 | orchestrator | 2026-01-02 01:03:03 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:03.260548 | orchestrator | 2026-01-02 01:03:03 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:03.260579 | orchestrator | 2026-01-02 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:06.299770 | orchestrator | 2026-01-02 01:03:06 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:06.301962 | orchestrator | 2026-01-02 01:03:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:06.302700 | orchestrator | 2026-01-02 01:03:06 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:06.304455 | orchestrator | 2026-01-02 01:03:06 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:06.304494 | orchestrator | 2026-01-02 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:09.350418 | orchestrator | 2026-01-02 01:03:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:09.350536 | orchestrator | 2026-01-02 01:03:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:09.351685 | orchestrator | 2026-01-02 01:03:09 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:09.352826 | orchestrator | 2026-01-02 01:03:09 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:09.353041 | orchestrator | 2026-01-02 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:12.400130 | orchestrator | 2026-01-02 01:03:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:12.403332 | orchestrator | 2026-01-02 01:03:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:12.406169 | orchestrator | 2026-01-02 01:03:12 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:12.410179 | orchestrator | 2026-01-02 01:03:12 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:12.410257 | orchestrator | 2026-01-02 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:15.456341 | orchestrator | 2026-01-02 01:03:15 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:15.457662 | orchestrator | 2026-01-02 01:03:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:15.459533 | orchestrator | 2026-01-02 01:03:15 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:15.461472 | orchestrator | 2026-01-02 01:03:15 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:15.461528 | orchestrator | 2026-01-02 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:18.497854 | orchestrator | 2026-01-02 01:03:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:18.498100 | orchestrator | 2026-01-02 01:03:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:18.498314 | orchestrator | 2026-01-02 01:03:18 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:18.499073 | orchestrator | 2026-01-02 01:03:18 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:18.499099 | orchestrator | 2026-01-02 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:21.529365 | orchestrator | 2026-01-02 01:03:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:21.532041 | orchestrator | 2026-01-02 01:03:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:21.535699 | orchestrator | 2026-01-02 01:03:21 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:21.537528 | orchestrator | 2026-01-02 01:03:21 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:21.537576 | orchestrator | 2026-01-02 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:24.568731 | orchestrator | 2026-01-02 01:03:24 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:24.570130 | orchestrator | 2026-01-02 01:03:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:24.572142 | orchestrator | 2026-01-02 01:03:24 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:24.573883 | orchestrator | 2026-01-02 01:03:24 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:24.574117 | orchestrator | 2026-01-02 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:27.613031 | orchestrator | 2026-01-02 01:03:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:27.614888 | orchestrator | 2026-01-02 01:03:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:27.617276 | orchestrator | 2026-01-02 01:03:27 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:27.619407 | orchestrator | 2026-01-02 01:03:27 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:27.619914 | orchestrator | 2026-01-02 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:30.665517 | orchestrator | 2026-01-02 01:03:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:30.666859 | orchestrator | 2026-01-02 01:03:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:30.669193 | orchestrator | 2026-01-02 01:03:30 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:30.671891 | orchestrator | 2026-01-02 01:03:30 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:30.672085 | orchestrator | 2026-01-02 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:33.711903 | orchestrator | 2026-01-02 01:03:33 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:33.713386 | orchestrator | 2026-01-02 01:03:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:33.715965 | orchestrator | 2026-01-02 01:03:33 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:33.718360 | orchestrator | 2026-01-02 01:03:33 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:33.718435 | orchestrator | 2026-01-02 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:36.756293 | orchestrator | 2026-01-02 01:03:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:36.758538 | orchestrator | 2026-01-02 01:03:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:36.760848 | orchestrator | 2026-01-02 01:03:36 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:36.763266 | orchestrator | 2026-01-02 01:03:36 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:36.763368 | orchestrator | 2026-01-02 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:39.803026 | orchestrator | 2026-01-02 01:03:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:39.804934 | orchestrator | 2026-01-02 01:03:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:39.806727 | orchestrator | 2026-01-02 01:03:39 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:39.808757 | orchestrator | 2026-01-02 01:03:39 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:39.808794 | orchestrator | 2026-01-02 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:42.855264 | orchestrator | 2026-01-02 01:03:42 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:42.856922 | orchestrator | 2026-01-02 01:03:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:42.860154 | orchestrator | 2026-01-02 01:03:42 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:42.860202 | orchestrator | 2026-01-02 01:03:42 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:42.860209 | orchestrator | 2026-01-02 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:45.893745 | orchestrator | 2026-01-02 01:03:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:45.894212 | orchestrator | 2026-01-02 01:03:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:45.895243 | orchestrator | 2026-01-02 01:03:45 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:45.896240 | orchestrator | 2026-01-02 01:03:45 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:45.896370 | orchestrator | 2026-01-02 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:48.929561 | orchestrator | 2026-01-02 01:03:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:48.930302 | orchestrator | 2026-01-02 01:03:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:48.931797 | orchestrator | 2026-01-02 01:03:48 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:48.933255 | orchestrator | 2026-01-02 01:03:48 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:48.933284 | orchestrator | 2026-01-02 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:51.981583 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:51.981703 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:51.984030 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:51.984932 | orchestrator | 2026-01-02 01:03:51 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:51.985031 | orchestrator | 2026-01-02 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:55.036928 | orchestrator | 2026-01-02 01:03:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:55.038974 | orchestrator | 2026-01-02 01:03:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:55.042481 | orchestrator | 2026-01-02 01:03:55 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:55.044599 | orchestrator | 2026-01-02 01:03:55 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:55.044634 | orchestrator | 2026-01-02 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:03:58.081051 | orchestrator | 2026-01-02 01:03:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:03:58.082957 | orchestrator | 2026-01-02 01:03:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:03:58.084557 | orchestrator | 2026-01-02 01:03:58 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:03:58.087086 | orchestrator | 2026-01-02 01:03:58 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:03:58.087634 | orchestrator | 2026-01-02 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:01.128020 | orchestrator | 2026-01-02 01:04:01 | INFO  | Task 
d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:04:37.712519 | orchestrator | 2026-01-02 01:04:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:04:37.714420 | orchestrator | 2026-01-02 01:04:37 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:04:37.715873 | orchestrator | 2026-01-02 01:04:37 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:04:37.715976 | orchestrator | 2026-01-02 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:40.767674 | orchestrator | 2026-01-02 01:04:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:04:40.770313 | orchestrator | 2026-01-02 01:04:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:04:40.772074 | orchestrator | 2026-01-02 01:04:40 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:04:40.774577 | orchestrator | 2026-01-02 01:04:40 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state STARTED 2026-01-02 01:04:40.774631 | orchestrator | 2026-01-02 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:43.816710 | orchestrator | 2026-01-02 01:04:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:04:43.817420 | orchestrator | 2026-01-02 01:04:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:04:43.819498 | orchestrator | 2026-01-02 01:04:43 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:04:43.823025 | orchestrator | 2026-01-02 01:04:43 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state STARTED 2026-01-02 01:04:43.827582 | orchestrator | 2026-01-02 01:04:43 | INFO  | Task 1540bd3f-a0a2-4b09-bae7-062602e769fc is in state SUCCESS 2026-01-02 01:04:43.828009 | orchestrator | 2026-01-02 01:04:43.828034 | orchestrator 
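The loop above is the OSISM client polling its tasks: it queries each task's state, and as long as any task is still STARTED it waits and re-checks (the final iteration shows 1540bd3f-… flipping to SUCCESS). A minimal sketch of that wait-and-recheck pattern, assuming a caller-supplied `get_state` callback (hypothetical; the real client queries the OSISM/Celery API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll until every task leaves the STARTED state.

    get_state(task_id) -> str is a hypothetical callback standing in
    for the real OSISM/Celery state lookup.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note that a fixed interval plus a hard timeout is what makes this log finite: without the deadline, a task stuck in STARTED would produce this output forever.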
| [WARNING]: Collection community.general does not support Ansible version 2026-01-02 01:04:43.828044 | orchestrator | 2.16.14 2026-01-02 01:04:43.828054 | orchestrator | 2026-01-02 01:04:43.828063 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-01-02 01:04:43.828072 | orchestrator | 2026-01-02 01:04:43.828081 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-02 01:04:43.828090 | orchestrator | Friday 02 January 2026 01:00:48 +0000 (0:00:00.267) 0:00:00.267 ******** 2026-01-02 01:04:43.828099 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828109 | orchestrator | 2026-01-02 01:04:43.828118 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-02 01:04:43.828127 | orchestrator | Friday 02 January 2026 01:00:49 +0000 (0:00:01.303) 0:00:01.571 ******** 2026-01-02 01:04:43.828135 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828144 | orchestrator | 2026-01-02 01:04:43.828152 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-02 01:04:43.828161 | orchestrator | Friday 02 January 2026 01:00:50 +0000 (0:00:01.015) 0:00:02.586 ******** 2026-01-02 01:04:43.828170 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828178 | orchestrator | 2026-01-02 01:04:43.828187 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-02 01:04:43.828196 | orchestrator | Friday 02 January 2026 01:00:51 +0000 (0:00:01.090) 0:00:03.677 ******** 2026-01-02 01:04:43.828204 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828213 | orchestrator | 2026-01-02 01:04:43.828222 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-02 01:04:43.828230 | orchestrator | Friday 02 January 2026 01:00:52 +0000 (0:00:01.164) 
0:00:04.842 ******** 2026-01-02 01:04:43.828239 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828248 | orchestrator | 2026-01-02 01:04:43.828256 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-02 01:04:43.828265 | orchestrator | Friday 02 January 2026 01:00:53 +0000 (0:00:00.996) 0:00:05.838 ******** 2026-01-02 01:04:43.828274 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828283 | orchestrator | 2026-01-02 01:04:43.828292 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-02 01:04:43.828301 | orchestrator | Friday 02 January 2026 01:00:54 +0000 (0:00:01.039) 0:00:06.877 ******** 2026-01-02 01:04:43.828309 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828318 | orchestrator | 2026-01-02 01:04:43.828327 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-02 01:04:43.828335 | orchestrator | Friday 02 January 2026 01:00:57 +0000 (0:00:02.099) 0:00:08.977 ******** 2026-01-02 01:04:43.828344 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828352 | orchestrator | 2026-01-02 01:04:43.828361 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-02 01:04:43.828370 | orchestrator | Friday 02 January 2026 01:00:58 +0000 (0:00:01.203) 0:00:10.180 ******** 2026-01-02 01:04:43.828422 | orchestrator | changed: [testbed-manager] 2026-01-02 01:04:43.828432 | orchestrator | 2026-01-02 01:04:43.828441 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-02 01:04:43.828450 | orchestrator | Friday 02 January 2026 01:01:55 +0000 (0:00:57.312) 0:01:07.493 ******** 2026-01-02 01:04:43.828458 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.828467 | orchestrator | 2026-01-02 01:04:43.828476 | orchestrator | PLAY [Restart ceph manager 
services] ******************************************* 2026-01-02 01:04:43.828485 | orchestrator | 2026-01-02 01:04:43.828493 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-02 01:04:43.828502 | orchestrator | Friday 02 January 2026 01:01:55 +0000 (0:00:00.155) 0:01:07.649 ******** 2026-01-02 01:04:43.828511 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:04:43.828520 | orchestrator | 2026-01-02 01:04:43.828528 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-02 01:04:43.828537 | orchestrator | 2026-01-02 01:04:43.828546 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-02 01:04:43.828554 | orchestrator | Friday 02 January 2026 01:02:07 +0000 (0:00:11.721) 0:01:19.370 ******** 2026-01-02 01:04:43.828563 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:04:43.828572 | orchestrator | 2026-01-02 01:04:43.828580 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-02 01:04:43.828589 | orchestrator | 2026-01-02 01:04:43.828598 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-02 01:04:43.828606 | orchestrator | Friday 02 January 2026 01:02:08 +0000 (0:00:01.346) 0:01:20.717 ******** 2026-01-02 01:04:43.828615 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:04:43.828624 | orchestrator | 2026-01-02 01:04:43.828633 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:04:43.828642 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-02 01:04:43.828651 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:04:43.828660 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-01-02 01:04:43.828669 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:04:43.828678 | orchestrator | 2026-01-02 01:04:43.828686 | orchestrator | 2026-01-02 01:04:43.828695 | orchestrator | 2026-01-02 01:04:43.828704 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:04:43.828713 | orchestrator | Friday 02 January 2026 01:02:20 +0000 (0:00:11.269) 0:01:31.986 ******** 2026-01-02 01:04:43.828721 | orchestrator | =============================================================================== 2026-01-02 01:04:43.828736 | orchestrator | Create admin user ------------------------------------------------------ 57.31s 2026-01-02 01:04:43.828755 | orchestrator | Restart ceph manager service ------------------------------------------- 24.34s 2026-01-02 01:04:43.828764 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s 2026-01-02 01:04:43.828772 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.30s 2026-01-02 01:04:43.828781 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.20s 2026-01-02 01:04:43.828791 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.16s 2026-01-02 01:04:43.828799 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.09s 2026-01-02 01:04:43.828808 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2026-01-02 01:04:43.828817 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2026-01-02 01:04:43.828826 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s 2026-01-02 01:04:43.828841 | orchestrator | Remove temporary file for ceph_dashboard_password 
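The dashboard bootstrap play above is, in essence, a fixed sequence of ceph manager configuration changes: disable the dashboard module, set the `mgr/dashboard/*` options, then re-enable the module (the long "Create admin user" step afterwards corresponds to `ceph dashboard ac-user-create`). A sketch that renders the same settings into the equivalent CLI command strings; the mapping from the Ansible task names to these exact `ceph` commands is an assumption, not taken from the playbook source:

```python
# Dashboard options applied by the play above, in task order.
DASHBOARD_SETTINGS = [
    ("mgr/dashboard/ssl", "false"),
    ("mgr/dashboard/server_port", "7000"),
    ("mgr/dashboard/server_addr", "0.0.0.0"),
    ("mgr/dashboard/standby_behaviour", "error"),
    ("mgr/dashboard/standby_error_status_code", "404"),
]

def render_dashboard_commands(settings=DASHBOARD_SETTINGS):
    """Return ceph CLI commands approximating the Ansible tasks
    (assumed equivalents; the play runs its own modules on
    testbed-manager rather than these literal commands)."""
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}" for key, value in settings]
    cmds.append("ceph mgr module enable dashboard")
    return cmds
```

Disabling before reconfiguring and enabling afterwards matches the play's ordering: several dashboard options are only picked up by the mgr module on (re)start, which is also why the follow-up plays restart the ceph manager service on each node.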
----------------------- 0.16s 2026-01-02 01:04:43.828849 | orchestrator | 2026-01-02 01:04:43.830985 | orchestrator | 2026-01-02 01:04:43.831022 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:04:43.831039 | orchestrator | 2026-01-02 01:04:43.831056 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:04:43.831066 | orchestrator | Friday 02 January 2026 01:01:44 +0000 (0:00:00.241) 0:00:00.241 ******** 2026-01-02 01:04:43.831098 | orchestrator | ok: [testbed-manager] 2026-01-02 01:04:43.831109 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:04:43.831118 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:04:43.831127 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:04:43.831135 | orchestrator | ok: [testbed-node-3] 2026-01-02 01:04:43.831144 | orchestrator | ok: [testbed-node-4] 2026-01-02 01:04:43.831152 | orchestrator | ok: [testbed-node-5] 2026-01-02 01:04:43.831161 | orchestrator | 2026-01-02 01:04:43.831170 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:04:43.831178 | orchestrator | Friday 02 January 2026 01:01:45 +0000 (0:00:00.889) 0:00:01.130 ******** 2026-01-02 01:04:43.831187 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831256 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831308 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831318 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831326 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831335 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831343 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-02 01:04:43.831352 | orchestrator 
| 2026-01-02 01:04:43.831361 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-02 01:04:43.831369 | orchestrator | 2026-01-02 01:04:43.831378 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-02 01:04:43.831387 | orchestrator | Friday 02 January 2026 01:01:46 +0000 (0:00:00.695) 0:00:01.826 ******** 2026-01-02 01:04:43.831419 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-02 01:04:43.831429 | orchestrator | 2026-01-02 01:04:43.831438 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-02 01:04:43.831447 | orchestrator | Friday 02 January 2026 01:01:47 +0000 (0:00:01.422) 0:00:03.248 ******** 2026-01-02 01:04:43.831458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.831471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-01-02 01:04:43.831489 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-02 01:04:43.831523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.831534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.831543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.831552 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.831562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.831571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.831586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.831668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.831686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 
01:04:43.831697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.831707 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.831717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.831727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.831736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.831755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.831771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:04:43.831802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.831812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.831822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.831831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.831846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.831897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.831908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.831924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.831934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.831943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.831987 | orchestrator | 
2026-01-02 01:04:43.831997 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-02 01:04:43.832006 | orchestrator | Friday 02 January 2026 01:01:50 +0000 (0:00:02.734) 0:00:05.982 ********
2026-01-02 01:04:43.832016 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-02 01:04:43.832024 | orchestrator | 
2026-01-02 01:04:43.832085 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-01-02 01:04:43.832096 | orchestrator | Friday 02 January 2026 01:01:51 +0000 (0:00:01.323) 0:00:07.305 ********
2026-01-02 01:04:43.832106 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-02 01:04:43.832126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832222 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.832231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832379 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:04:43.832409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.832901 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.832958 | orchestrator | 
2026-01-02 01:04:43.832974 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-01-02 01:04:43.832989 | orchestrator | Friday 02 January 2026 01:01:57 +0000 (0:00:05.650) 0:00:12.956 ********
2026-01-02 01:04:43.833007 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-02 01:04:43.833031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833048 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833163 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:04:43.833178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833220 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833238 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:04:43.833248 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:04:43.833257 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:04:43.833269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833344 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:04:43.833353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833375 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:04:43.833410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.833428 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:04:43.833437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.833447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.833465 | orchestrator | skipping: 
[testbed-node-5] 2026-01-02 01:04:43.833474 | orchestrator | 2026-01-02 01:04:43.833484 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-02 01:04:43.833501 | orchestrator | Friday 02 January 2026 01:01:59 +0000 (0:00:02.391) 0:00:15.347 ******** 2026-01-02 01:04:43.833516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  
2026-01-02 01:04:43.833567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833609 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833689 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.833712 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-02 01:04:43.833783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833804 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.833819 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833830 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.833841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833913 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833922 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.833935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833950 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.833960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.833974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-02 01:04:43.833984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-02 01:04:43.833993 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.834002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.834011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-02 01:04:43.834061 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.834071 | orchestrator | 2026-01-02 01:04:43.834080 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-02 01:04:43.834089 | orchestrator | Friday 02 January 2026 01:02:02 +0000 (0:00:02.760) 0:00:18.108 ******** 2026-01-02 01:04:43.834098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834112 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', 
"http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-02 01:04:43.834133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834170 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.834207 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834249 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834334 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-02 01:04:43.834344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-02 01:04:43.834414 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834424 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.834481 | orchestrator | 2026-01-02 01:04:43.834490 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-02 01:04:43.834505 | orchestrator | Friday 02 January 2026 01:02:08 +0000 (0:00:06.175) 0:00:24.283 ******** 2026-01-02 01:04:43.834514 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 01:04:43.834523 | orchestrator | 
2026-01-02 01:04:43.834532 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-02 01:04:43.834541 | orchestrator | Friday 02 January 2026 01:02:10 +0000 (0:00:01.219) 0:00:25.503 ******** 2026-01-02 01:04:43.834550 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.834559 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.834568 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.834576 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.834585 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.834594 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.834603 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.834611 | orchestrator | 2026-01-02 01:04:43.834620 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-02 01:04:43.834629 | orchestrator | Friday 02 January 2026 01:02:10 +0000 (0:00:00.690) 0:00:26.193 ******** 2026-01-02 01:04:43.834638 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 01:04:43.834647 | orchestrator | 2026-01-02 01:04:43.834656 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-02 01:04:43.834664 | orchestrator | Friday 02 January 2026 01:02:11 +0000 (0:00:00.766) 0:00:26.960 ******** 2026-01-02 01:04:43.834673 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834691 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834709 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834721 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834730 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834739 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834757 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834766 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834775 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834783 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834801 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834810 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834819 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834827 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834851 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834860 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834869 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834878 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834886 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834895 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834904 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834913 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834922 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834935 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834944 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834953 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.834961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834970 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-02 01:04:43.834979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-02 01:04:43.834988 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-02 01:04:43.834996 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 01:04:43.835005 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-02 01:04:43.835014 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-02 01:04:43.835023 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-02 01:04:43.835031 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-02 01:04:43.835040 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-02 01:04:43.835049 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-02 01:04:43.835057 | orchestrator | 2026-01-02 01:04:43.835066 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-02 01:04:43.835075 | orchestrator | Friday 02 January 2026 01:02:13 +0000 (0:00:01.800) 0:00:28.761 ******** 2026-01-02 01:04:43.835084 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-02 01:04:43.835093 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.835102 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-02 01:04:43.835111 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.835119 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-02 01:04:43.835128 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.835137 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-02 01:04:43.835146 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835155 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-02 01:04:43.835163 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835172 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-02 01:04:43.835181 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.835190 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-02 01:04:43.835198 | orchestrator | 2026-01-02 01:04:43.835207 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-02 01:04:43.835216 | orchestrator | Friday 02 January 2026 01:02:27 +0000 (0:00:14.666) 0:00:43.428 ******** 2026-01-02 01:04:43.835225 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-02 01:04:43.835233 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.835242 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-02 01:04:43.835251 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.835260 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-02 01:04:43.835268 | orchestrator | skipping: 
[testbed-node-2] 2026-01-02 01:04:43.835277 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-02 01:04:43.835286 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835299 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-02 01:04:43.835308 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835316 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-02 01:04:43.835328 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.835337 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-02 01:04:43.835346 | orchestrator | 2026-01-02 01:04:43.835355 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-02 01:04:43.835363 | orchestrator | Friday 02 January 2026 01:02:31 +0000 (0:00:03.421) 0:00:46.849 ******** 2026-01-02 01:04:43.835372 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-02 01:04:43.835381 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.835409 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-02 01:04:43.835419 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-02 01:04:43.835428 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.835437 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.835445 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-02 
01:04:43.835454 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835463 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-02 01:04:43.835472 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835481 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-02 01:04:43.835489 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-02 01:04:43.835498 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.835507 | orchestrator | 2026-01-02 01:04:43.835516 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-02 01:04:43.835525 | orchestrator | Friday 02 January 2026 01:02:33 +0000 (0:00:01.671) 0:00:48.520 ******** 2026-01-02 01:04:43.835533 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 01:04:43.835542 | orchestrator | 2026-01-02 01:04:43.835551 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-02 01:04:43.835560 | orchestrator | Friday 02 January 2026 01:02:33 +0000 (0:00:00.740) 0:00:49.261 ******** 2026-01-02 01:04:43.835569 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.835577 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.835586 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.835595 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.835603 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835612 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835621 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.835630 | orchestrator | 2026-01-02 01:04:43.835638 | orchestrator | TASK [prometheus : Copying over 
my.cnf for mysqld_exporter] ******************** 2026-01-02 01:04:43.835647 | orchestrator | Friday 02 January 2026 01:02:34 +0000 (0:00:00.739) 0:00:50.001 ******** 2026-01-02 01:04:43.835656 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.835665 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835673 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835682 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.835691 | orchestrator | changed: [testbed-node-0] 2026-01-02 01:04:43.835699 | orchestrator | changed: [testbed-node-1] 2026-01-02 01:04:43.835708 | orchestrator | changed: [testbed-node-2] 2026-01-02 01:04:43.835717 | orchestrator | 2026-01-02 01:04:43.835725 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-02 01:04:43.835734 | orchestrator | Friday 02 January 2026 01:02:36 +0000 (0:00:02.325) 0:00:52.327 ******** 2026-01-02 01:04:43.835749 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835758 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.835766 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835775 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.835784 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835793 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.835801 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835810 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.835819 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835827 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835836 | 
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835845 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.835854 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-02 01:04:43.835862 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835871 | orchestrator | 2026-01-02 01:04:43.835880 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-02 01:04:43.835892 | orchestrator | Friday 02 January 2026 01:02:38 +0000 (0:00:01.735) 0:00:54.062 ******** 2026-01-02 01:04:43.835901 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-02 01:04:43.835910 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.835919 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-02 01:04:43.835928 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.835936 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-02 01:04:43.835945 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.835954 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-02 01:04:43.835963 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.835972 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-02 01:04:43.835981 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.835994 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-02 01:04:43.836004 | orchestrator | skipping: 
[testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-02 01:04:43.836012 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.836021 | orchestrator | 2026-01-02 01:04:43.836030 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-02 01:04:43.836039 | orchestrator | Friday 02 January 2026 01:02:39 +0000 (0:00:01.380) 0:00:55.443 ******** 2026-01-02 01:04:43.836048 | orchestrator | [WARNING]: Skipped 2026-01-02 01:04:43.836057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-02 01:04:43.836066 | orchestrator | due to this access issue: 2026-01-02 01:04:43.836074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-02 01:04:43.836083 | orchestrator | not a directory 2026-01-02 01:04:43.836092 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-02 01:04:43.836101 | orchestrator | 2026-01-02 01:04:43.836110 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-02 01:04:43.836118 | orchestrator | Friday 02 January 2026 01:02:41 +0000 (0:00:01.146) 0:00:56.589 ******** 2026-01-02 01:04:43.836132 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.836141 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.836150 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.836159 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.836167 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.836176 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.836185 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.836193 | orchestrator | 2026-01-02 01:04:43.836202 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-02 01:04:43.836211 | orchestrator | Friday 02 January 2026 
01:02:42 +0000 (0:00:00.895) 0:00:57.485 ******** 2026-01-02 01:04:43.836220 | orchestrator | skipping: [testbed-manager] 2026-01-02 01:04:43.836229 | orchestrator | skipping: [testbed-node-0] 2026-01-02 01:04:43.836237 | orchestrator | skipping: [testbed-node-1] 2026-01-02 01:04:43.836246 | orchestrator | skipping: [testbed-node-2] 2026-01-02 01:04:43.836254 | orchestrator | skipping: [testbed-node-3] 2026-01-02 01:04:43.836263 | orchestrator | skipping: [testbed-node-4] 2026-01-02 01:04:43.836272 | orchestrator | skipping: [testbed-node-5] 2026-01-02 01:04:43.836281 | orchestrator | 2026-01-02 01:04:43.836289 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-01-02 01:04:43.836298 | orchestrator | Friday 02 January 2026 01:02:42 +0000 (0:00:00.713) 0:00:58.198 ******** 2026-01-02 01:04:43.836308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836331 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-02 01:04:43.836346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 2026-01-02 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:43.836356 | orchestrator | 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-02 01:04:43.836423 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.836445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.836457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-02 01:04:43.836472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:04:43.836577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836639 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836680 | orchestrator |
2026-01-02 01:04:43.836688 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-01-02 01:04:43.836697 | orchestrator | Friday 02 January 2026 01:02:47 +0000 (0:00:04.426) 0:01:02.625 ********
2026-01-02 01:04:43.836706 | orchestrator | changed: [testbed-manager] => {
2026-01-02 01:04:43.836715 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836724 | orchestrator | }
2026-01-02 01:04:43.836732 | orchestrator | changed: [testbed-node-0] => {
2026-01-02 01:04:43.836741 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836750 | orchestrator | }
2026-01-02 01:04:43.836759 | orchestrator | changed: [testbed-node-1] => {
2026-01-02 01:04:43.836767 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836776 | orchestrator | }
2026-01-02 01:04:43.836785 | orchestrator | changed: [testbed-node-2] => {
2026-01-02 01:04:43.836793 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836802 | orchestrator | }
2026-01-02 01:04:43.836810 | orchestrator | changed: [testbed-node-3] => {
2026-01-02 01:04:43.836819 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836827 | orchestrator | }
2026-01-02 01:04:43.836836 | orchestrator | changed: [testbed-node-4] => {
2026-01-02 01:04:43.836845 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836853 | orchestrator | }
2026-01-02 01:04:43.836862 | orchestrator | changed: [testbed-node-5] => {
2026-01-02 01:04:43.836870 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:04:43.836879 | orchestrator | }
2026-01-02 01:04:43.836887 | orchestrator |
2026-01-02 01:04:43.836896 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-02 01:04:43.836905 | orchestrator | Friday 02 January 2026 01:02:48 +0000 (0:00:00.944) 0:01:03.570 ********
2026-01-02 01:04:43.836928 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-02 01:04:43.836949 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.836959 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.836969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:04:43.836978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.836988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.837005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837047 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:04:43.837057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.837066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837111 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:04:43.837120 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:04:43.837129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.837143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-02 01:04:43.837180 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:04:43.837189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.837203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837226 | orchestrator | skipping: [testbed-node-3]
2026-01-02 01:04:43.837235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.837249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837268 | orchestrator | skipping: [testbed-node-4]
2026-01-02 01:04:43.837277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-02 01:04:43.837286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-02 01:04:43.837308 | orchestrator | skipping: [testbed-node-5]
2026-01-02 01:04:43.837317 | orchestrator |
2026-01-02 01:04:43.837326 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-02
01:04:43.837335 | orchestrator | Friday 02 January 2026 01:02:49 +0000 (0:00:01.870) 0:01:05.441 ********
2026-01-02 01:04:43.837344 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-02 01:04:43.837352 | orchestrator | skipping: [testbed-manager]
2026-01-02 01:04:43.837361 | orchestrator |
2026-01-02 01:04:43.837370 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837378 | orchestrator | Friday 02 January 2026 01:02:50 +0000 (0:00:00.927) 0:01:06.368 ********
2026-01-02 01:04:43.837387 | orchestrator |
2026-01-02 01:04:43.837409 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837418 | orchestrator | Friday 02 January 2026 01:02:50 +0000 (0:00:00.066) 0:01:06.434 ********
2026-01-02 01:04:43.837427 | orchestrator |
2026-01-02 01:04:43.837439 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837448 | orchestrator | Friday 02 January 2026 01:02:51 +0000 (0:00:00.062) 0:01:06.497 ********
2026-01-02 01:04:43.837457 | orchestrator |
2026-01-02 01:04:43.837466 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837474 | orchestrator | Friday 02 January 2026 01:02:51 +0000 (0:00:00.062) 0:01:06.560 ********
2026-01-02 01:04:43.837483 | orchestrator |
2026-01-02 01:04:43.837492 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837500 | orchestrator | Friday 02 January 2026 01:02:51 +0000 (0:00:00.062) 0:01:06.622 ********
2026-01-02 01:04:43.837509 | orchestrator |
2026-01-02 01:04:43.837518 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837526 | orchestrator | Friday 02 January 2026 01:02:51 +0000 (0:00:00.063) 0:01:06.685 ********
2026-01-02 01:04:43.837535 | orchestrator |
2026-01-02 01:04:43.837544 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-02 01:04:43.837552 | orchestrator | Friday 02 January 2026 01:02:51 +0000 (0:00:00.061) 0:01:06.746 ********
2026-01-02 01:04:43.837561 | orchestrator |
2026-01-02 01:04:43.837574 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-02 01:04:43.837584 | orchestrator | Friday 02 January 2026 01:02:51 +0000 (0:00:00.294) 0:01:07.041 ********
2026-01-02 01:04:43.837592 | orchestrator | changed: [testbed-manager]
2026-01-02 01:04:43.837601 | orchestrator |
2026-01-02 01:04:43.837610 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-02 01:04:43.837619 | orchestrator | Friday 02 January 2026 01:03:15 +0000 (0:00:24.032) 0:01:31.074 ********
2026-01-02 01:04:43.837628 | orchestrator | changed: [testbed-manager]
2026-01-02 01:04:43.837636 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:04:43.837645 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:04:43.837654 | orchestrator | changed: [testbed-node-5]
2026-01-02 01:04:43.837662 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:04:43.837671 | orchestrator | changed: [testbed-node-4]
2026-01-02 01:04:43.837685 | orchestrator | changed: [testbed-node-3]
2026-01-02 01:04:43.837693 | orchestrator |
2026-01-02 01:04:43.837702 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-02 01:04:43.837711 | orchestrator | Friday 02 January 2026 01:03:28 +0000 (0:00:12.610) 0:01:43.685 ********
2026-01-02 01:04:43.837720 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:04:43.837728 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:04:43.837737 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:04:43.837746 | orchestrator |
2026-01-02 01:04:43.837754 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-02 01:04:43.837763 | orchestrator | Friday 02 January 2026 01:03:38 +0000 (0:00:10.295) 0:01:53.980 ********
2026-01-02 01:04:43.837772 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:04:43.837781 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:04:43.837789 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:04:43.837798 | orchestrator |
2026-01-02 01:04:43.837807 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-02 01:04:43.837815 | orchestrator | Friday 02 January 2026 01:03:43 +0000 (0:00:05.237) 0:01:59.218 ********
2026-01-02 01:04:43.837824 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:04:43.837833 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:04:43.837842 | orchestrator | changed: [testbed-node-5]
2026-01-02 01:04:43.837850 | orchestrator | changed: [testbed-node-3]
2026-01-02 01:04:43.837859 | orchestrator | changed: [testbed-node-4]
2026-01-02 01:04:43.837867 | orchestrator | changed: [testbed-manager]
2026-01-02 01:04:43.837876 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:04:43.837885 | orchestrator |
2026-01-02 01:04:43.837893 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-02 01:04:43.837902 | orchestrator | Friday 02 January 2026 01:03:56 +0000 (0:00:12.811) 0:02:12.029 ********
2026-01-02 01:04:43.837911 | orchestrator | changed: [testbed-manager]
2026-01-02 01:04:43.837920 | orchestrator |
2026-01-02 01:04:43.837928 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-02 01:04:43.837937 | orchestrator | Friday 02 January 2026 01:04:09 +0000 (0:00:13.118) 0:02:25.148 ********
2026-01-02 01:04:43.837946 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:04:43.837955 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:04:43.837963 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:04:43.837976 | orchestrator |
2026-01-02 01:04:43.837991 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-02 01:04:43.838006 | orchestrator | Friday 02 January 2026 01:04:19 +0000 (0:00:10.004) 0:02:35.153 ********
2026-01-02 01:04:43.838069 | orchestrator | changed: [testbed-manager]
2026-01-02 01:04:43.838087 | orchestrator |
2026-01-02 01:04:43.838101 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-02 01:04:43.838110 | orchestrator | Friday 02 January 2026 01:04:30 +0000 (0:00:10.711) 0:02:45.864 ********
2026-01-02 01:04:43.838119 | orchestrator | changed: [testbed-node-3]
2026-01-02 01:04:43.838127 | orchestrator | changed: [testbed-node-4]
2026-01-02 01:04:43.838136 | orchestrator | changed: [testbed-node-5]
2026-01-02 01:04:43.838144 | orchestrator |
2026-01-02 01:04:43.838153 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:04:43.838162 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-02 01:04:43.838171 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-02 01:04:43.838180 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-02 01:04:43.838189 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-02 01:04:43.838211 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-02 01:04:43.838220 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-02 01:04:43.838229 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-02 01:04:43.838238 | orchestrator |
2026-01-02 01:04:43.838246 | orchestrator |
2026-01-02 01:04:43.838255 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:04:43.838264 | orchestrator | Friday 02 January 2026 01:04:40 +0000 (0:00:10.588) 0:02:56.453 ********
2026-01-02 01:04:43.838273 | orchestrator | ===============================================================================
2026-01-02 01:04:43.838281 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.03s
2026-01-02 01:04:43.838297 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.67s
2026-01-02 01:04:43.838306 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.12s
2026-01-02 01:04:43.838314 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.81s
2026-01-02 01:04:43.838323 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.61s
2026-01-02 01:04:43.838332 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.71s
2026-01-02 01:04:43.838340 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.59s
2026-01-02 01:04:43.838349 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.30s
2026-01-02 01:04:43.838358 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.00s
2026-01-02 01:04:43.838366 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.18s
2026-01-02 01:04:43.838375 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.65s
2026-01-02 01:04:43.838384 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.24s
2026-01-02
01:04:43.838437 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.43s 2026-01-02 01:04:43.838449 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.42s 2026-01-02 01:04:43.838465 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.76s 2026-01-02 01:04:43.838479 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.73s 2026-01-02 01:04:43.838494 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.39s 2026-01-02 01:04:43.838508 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.33s 2026-01-02 01:04:43.838522 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.87s 2026-01-02 01:04:43.838537 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.80s 2026-01-02 01:04:46.877050 | orchestrator | 2026-01-02 01:04:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:04:46.878283 | orchestrator | 2026-01-02 01:04:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:04:46.880096 | orchestrator | 2026-01-02 01:04:46 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:04:46.883458 | orchestrator | 2026-01-02 01:04:46 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state STARTED 2026-01-02 01:04:46.883594 | orchestrator | 2026-01-02 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:04:49.933226 | orchestrator | 2026-01-02 01:04:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:04:49.935513 | orchestrator | 2026-01-02 01:04:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:04:49.936733 | orchestrator | 2026-01-02 01:04:49 | INFO  | Task 
4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:05:54.070716 | orchestrator | 2026-01-02 01:05:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:05:54.072078 | orchestrator | 2026-01-02 01:05:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:05:54.075276 | orchestrator | 2026-01-02 01:05:54 | INFO  | Task 
4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state STARTED 2026-01-02 01:05:54.077636 | orchestrator | 2026-01-02 01:05:54 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state STARTED 2026-01-02 01:05:54.077746 | orchestrator | 2026-01-02 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:05:57.120733 | orchestrator | 2026-01-02 01:05:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:05:57.122347 | orchestrator | 2026-01-02 01:05:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:05:57.124680 | orchestrator | 2026-01-02 01:05:57 | INFO  | Task 4979e630-3360-4cea-a090-fc1f1ee5eb66 is in state SUCCESS 2026-01-02 01:05:57.127056 | orchestrator | 2026-01-02 01:05:57 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state STARTED 2026-01-02 01:05:57.127173 | orchestrator | 2026-01-02 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:00.168439 | orchestrator | 2026-01-02 01:06:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:00.169470 | orchestrator | 2026-01-02 01:06:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:06:00.171407 | orchestrator | 2026-01-02 01:06:00 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state STARTED 2026-01-02 01:06:00.171457 | orchestrator | 2026-01-02 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:03.213910 | orchestrator | 2026-01-02 01:06:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:03.216316 | orchestrator | 2026-01-02 01:06:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:06:03.219428 | orchestrator | 2026-01-02 01:06:03 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state STARTED 2026-01-02 01:06:03.219552 | orchestrator | 2026-01-02 01:06:03 | INFO  | Wait 1 second(s) until the next 
check 2026-01-02 01:06:39.827090 | orchestrator | 2026-01-02 01:06:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:39.831845 | orchestrator | 2026-01-02 01:06:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in 
state STARTED 2026-01-02 01:06:39.832797 | orchestrator | 2026-01-02 01:06:39 | INFO  | Task 3f9b376f-5903-439b-9bc9-296c6849f6fe is in state SUCCESS 2026-01-02 01:06:39.834669 | orchestrator | 2026-01-02 01:06:39.834710 | orchestrator | 2026-01-02 01:06:39.834716 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-02 01:06:39.834721 | orchestrator | 2026-01-02 01:06:39.834726 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-02 01:06:39.834748 | orchestrator | Friday 02 January 2026 01:00:10 +0000 (0:00:00.092) 0:00:00.092 ******** 2026-01-02 01:06:39.834753 | orchestrator | changed: [localhost] 2026-01-02 01:06:39.834758 | orchestrator | 2026-01-02 01:06:39.834762 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-02 01:06:39.834766 | orchestrator | Friday 02 January 2026 01:00:11 +0000 (0:00:00.879) 0:00:00.972 ******** 2026-01-02 01:06:39.834770 | orchestrator | 2026-01-02 01:06:39.834774 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-02 01:06:39.834824 | orchestrator | changed: [localhost] 2026-01-02 01:06:39.834828 | orchestrator | 2026-01-02 01:06:39.834832 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-02 01:06:39.834836 | orchestrator | Friday 02 January 2026 01:05:41 +0000 (0:05:30.452) 0:05:31.424 ******** 2026-01-02 01:06:39.834840 | orchestrator | changed: [localhost] 2026-01-02 01:06:39.834843 | orchestrator | 2026-01-02 01:06:39.834847 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-02 01:06:39.834851 | orchestrator | 2026-01-02 01:06:39.834855 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-02 01:06:39.834858 | orchestrator | Friday 02 January 2026 01:05:54 +0000 (0:00:12.766) 0:05:44.191 ******** 2026-01-02 01:06:39.834862 | orchestrator | ok: [testbed-node-0] 2026-01-02 01:06:39.834866 | orchestrator | ok: [testbed-node-1] 2026-01-02 01:06:39.834870 | orchestrator | ok: [testbed-node-2] 2026-01-02 01:06:39.834874 | orchestrator | 2026-01-02 01:06:39.834877 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-02 01:06:39.834881 | orchestrator | Friday 02 January 2026 01:05:54 +0000 (0:00:00.330) 0:05:44.521 ******** 2026-01-02 01:06:39.834885 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-02 01:06:39.834889 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-02 01:06:39.834894 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-02 01:06:39.834901 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-02 01:06:39.834908 | orchestrator | 
2026-01-02 01:06:39.834914 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-02 01:06:39.834920 | orchestrator | skipping: no hosts matched 2026-01-02 01:06:39.834928 | orchestrator | 2026-01-02 01:06:39.834947 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-02 01:06:39.834954 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:39.834963 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:39.834972 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:39.834978 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-02 01:06:39.834984 | orchestrator | 2026-01-02 01:06:39.834991 | orchestrator | 2026-01-02 01:06:39.834998 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-02 01:06:39.835006 | orchestrator | Friday 02 January 2026 01:05:55 +0000 (0:00:00.559) 0:05:45.081 ******** 2026-01-02 01:06:39.835010 | orchestrator | =============================================================================== 2026-01-02 01:06:39.835014 | orchestrator | Download ironic-agent initramfs --------------------------------------- 330.45s 2026-01-02 01:06:39.835017 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.77s 2026-01-02 01:06:39.835021 | orchestrator | Ensure the destination directory exists --------------------------------- 0.88s 2026-01-02 01:06:39.835025 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-01-02 01:06:39.835029 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-01-02 01:06:39.835032 | orchestrator | 
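The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a simple poll-until-done loop over asynchronous task IDs. A minimal sketch of that pattern, assuming a hypothetical get_state callable that returns "STARTED" or "SUCCESS" for a task ID (the real osism client API is not shown in this log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each pending task until none is still STARTED.

    get_state is a hypothetical callable mapping a task ID to its
    current state string; this mirrors the log output, not the
    actual osism implementation.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so we can discard while iterating
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

As in the log, finished tasks simply drop out of the next polling round while the remaining ones keep reporting STARTED until they too reach SUCCESS.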
2026-01-02 01:06:39.835036 | orchestrator |
2026-01-02 01:06:39.835040 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-02 01:06:39.835044 | orchestrator |
2026-01-02 01:06:39.835047 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-02 01:06:39.835051 | orchestrator | Friday 02 January 2026 01:04:45 +0000 (0:00:00.255) 0:00:00.255 ********
2026-01-02 01:06:39.835055 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:06:39.835059 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:06:39.835062 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:06:39.835066 | orchestrator |
2026-01-02 01:06:39.835070 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-02 01:06:39.835074 | orchestrator | Friday 02 January 2026 01:04:46 +0000 (0:00:00.297) 0:00:00.552 ********
2026-01-02 01:06:39.835078 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-01-02 01:06:39.835082 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-01-02 01:06:39.835094 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-01-02 01:06:39.835098 | orchestrator |
2026-01-02 01:06:39.835102 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-01-02 01:06:39.835106 | orchestrator |
2026-01-02 01:06:39.835110 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-02 01:06:39.835114 | orchestrator | Friday 02 January 2026 01:04:46 +0000 (0:00:00.421) 0:00:00.974 ********
2026-01-02 01:06:39.835117 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:06:39.835122 | orchestrator |
2026-01-02 01:06:39.835126 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-01-02 01:06:39.835133 | orchestrator | Friday 02 January 2026 01:04:46 +0000 (0:00:00.509) 0:00:01.484 ********
2026-01-02 01:06:39.835143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835170 | orchestrator |
2026-01-02 01:06:39.835174 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-01-02 01:06:39.835178 | orchestrator | Friday 02 January 2026 01:04:47 +0000 (0:00:00.787) 0:00:02.272 ********
2026-01-02 01:06:39.835182 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 01:06:39.835185 | orchestrator |
2026-01-02 01:06:39.835189 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-01-02 01:06:39.835193 | orchestrator | Friday 02 January 2026 01:04:48 +0000 (0:00:00.798) 0:00:03.070 ********
2026-01-02 01:06:39.835197 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-02 01:06:39.835201 | orchestrator |
2026-01-02 01:06:39.835205 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-01-02 01:06:39.835208 | orchestrator | Friday 02 January 2026 01:04:49 +0000 (0:00:00.737) 0:00:03.808 ********
2026-01-02 01:06:39.835216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835233 | orchestrator |
2026-01-02 01:06:39.835237 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-01-02 01:06:39.835241 | orchestrator | Friday 02 January 2026 01:04:50 +0000 (0:00:01.339) 0:00:05.147 ********
2026-01-02 01:06:39.835248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835253 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.835258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835263 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.835270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835275 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.835280 | orchestrator |
2026-01-02 01:06:39.835284 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-01-02 01:06:39.835289 | orchestrator | Friday 02 January 2026 01:04:51 +0000 (0:00:00.467) 0:00:05.614 ********
2026-01-02 01:06:39.835293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835298 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.835307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835311 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.835319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835324 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.835329 | orchestrator |
2026-01-02 01:06:39.835333 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-01-02 01:06:39.835338 | orchestrator | Friday 02 January 2026 01:04:51 +0000 (0:00:00.869) 0:00:06.484 ********
2026-01-02 01:06:39.835343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835363 | orchestrator |
2026-01-02 01:06:39.835367 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-01-02 01:06:39.835372 | orchestrator | Friday 02 January 2026 01:04:53 +0000 (0:00:01.257) 0:00:07.741 ********
2026-01-02 01:06:39.835377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835394 | orchestrator |
2026-01-02 01:06:39.835398 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-01-02 01:06:39.835403 | orchestrator | Friday 02 January 2026 01:04:54 +0000 (0:00:01.362) 0:00:09.104 ********
2026-01-02 01:06:39.835408 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.835412 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.835417 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.835422 | orchestrator |
2026-01-02 01:06:39.835428 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-01-02 01:06:39.835435 | orchestrator | Friday 02 January 2026 01:04:55 +0000 (0:00:00.500) 0:00:09.604 ********
2026-01-02 01:06:39.835442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-02 01:06:39.835450 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-02 01:06:39.835457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-01-02 01:06:39.835464 | orchestrator |
2026-01-02 01:06:39.835474 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-01-02 01:06:39.835481 | orchestrator | Friday 02 January 2026 01:04:56 +0000 (0:00:01.384) 0:00:10.989 ********
2026-01-02 01:06:39.835492 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-02 01:06:39.835497 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-02 01:06:39.835501 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-01-02 01:06:39.835506 | orchestrator |
2026-01-02 01:06:39.835510 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-01-02 01:06:39.835536 | orchestrator | Friday 02 January 2026 01:04:57 +0000 (0:00:01.234) 0:00:12.224 ********
2026-01-02 01:06:39.835542 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-02 01:06:39.835546 | orchestrator |
2026-01-02 01:06:39.835550 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-01-02 01:06:39.835555 | orchestrator | Friday 02 January 2026 01:04:58 +0000 (0:00:00.714) 0:00:12.938 ********
2026-01-02 01:06:39.835559 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:06:39.835564 | orchestrator | ok: [testbed-node-1]
2026-01-02 01:06:39.835568 | orchestrator | ok: [testbed-node-2]
2026-01-02 01:06:39.835573 | orchestrator |
2026-01-02 01:06:39.835578 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-02 01:06:39.835586 | orchestrator | Friday 02 January 2026 01:04:59 +0000 (0:00:00.733) 0:00:13.672 ********
2026-01-02 01:06:39.835593 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:06:39.835599 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:06:39.835605 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:06:39.835611 | orchestrator |
2026-01-02 01:06:39.835618 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-01-02 01:06:39.835625 | orchestrator | Friday 02 January 2026 01:05:00 +0000 (0:00:01.477) 0:00:15.149 ********
2026-01-02 01:06:39.835634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835660 | orchestrator |
2026-01-02 01:06:39.835666 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-01-02 01:06:39.835673 | orchestrator | Friday 02 January 2026 01:05:01 +0000 (0:00:01.086) 0:00:16.235 ********
2026-01-02 01:06:39.835679 | orchestrator | changed: [testbed-node-0] => {
2026-01-02 01:06:39.835686 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:06:39.835692 | orchestrator | }
2026-01-02 01:06:39.835699 | orchestrator | changed: [testbed-node-1] => {
2026-01-02 01:06:39.835705 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:06:39.835711 | orchestrator | }
2026-01-02 01:06:39.835720 | orchestrator | changed: [testbed-node-2] => {
2026-01-02 01:06:39.835724 | orchestrator |  "msg": "Notifying handlers"
2026-01-02 01:06:39.835728 | orchestrator | }
2026-01-02 01:06:39.835732 | orchestrator |
2026-01-02 01:06:39.835736 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-02 01:06:39.835740 | orchestrator | Friday 02 January 2026 01:05:02 +0000 (0:00:00.328) 0:00:16.563 ********
2026-01-02 01:06:39.835744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835752 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.835756 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.835766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-02 01:06:39.835770 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.835774 | orchestrator |
2026-01-02 01:06:39.835778 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-01-02 01:06:39.835781 | orchestrator | Friday 02 January 2026 01:05:02 +0000 (0:00:00.735) 0:00:17.298 ********
2026-01-02 01:06:39.835790 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:06:39.835793 | orchestrator |
2026-01-02 01:06:39.835797 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-01-02 01:06:39.835801 | orchestrator | Friday 02 January 2026 01:05:05 +0000 (0:00:02.315) 0:00:19.614 ********
2026-01-02 01:06:39.835805 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:06:39.835808 | orchestrator |
2026-01-02 01:06:39.835812 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-02 01:06:39.835816 | orchestrator | Friday 02 January 2026 01:05:07 +0000 (0:00:02.254) 0:00:21.869 ********
2026-01-02 01:06:39.835820 | orchestrator |
2026-01-02 01:06:39.835823 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-02 01:06:39.835827 | orchestrator | Friday 02 January 2026 01:05:07 +0000 (0:00:00.063) 0:00:21.933 ********
2026-01-02 01:06:39.835831 | orchestrator |
2026-01-02 01:06:39.835835 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-01-02 01:06:39.835838 | orchestrator | Friday 02 January 2026 01:05:07 +0000 (0:00:00.061) 0:00:21.994 ********
2026-01-02 01:06:39.835842 | orchestrator |
2026-01-02 01:06:39.835846 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-02 01:06:39.835850 | orchestrator | Friday 02 January 2026 01:05:07 +0000 (0:00:00.071) 0:00:22.065 ********
2026-01-02 01:06:39.835854 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.835857 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.835861 | orchestrator | changed: [testbed-node-0]
2026-01-02 01:06:39.835865 | orchestrator |
2026-01-02 01:06:39.835869 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-02 01:06:39.835873 | orchestrator | Friday 02 January 2026 01:05:09 +0000 (0:00:01.768) 0:00:23.834 ********
2026-01-02 01:06:39.835879 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.835886 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.835896 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-02 01:06:39.835904 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-02 01:06:39.835910 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-02 01:06:39.835917 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-01-02 01:06:39.835923 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:06:39.835930 | orchestrator |
2026-01-02 01:06:39.835936 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-02 01:06:39.835942 | orchestrator | Friday 02 January 2026 01:06:00 +0000 (0:00:50.765) 0:01:14.600 ********
2026-01-02 01:06:39.835948 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.835954 | orchestrator | changed: [testbed-node-1]
2026-01-02 01:06:39.835960 | orchestrator | changed: [testbed-node-2]
2026-01-02 01:06:39.835966 | orchestrator |
2026-01-02 01:06:39.835971 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-02 01:06:39.835977 | orchestrator | Friday 02 January 2026 01:06:31 +0000 (0:00:31.548) 0:01:46.148 ********
2026-01-02 01:06:39.835984 | orchestrator | ok: [testbed-node-0]
2026-01-02 01:06:39.835991 | orchestrator |
2026-01-02 01:06:39.835997 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-02 01:06:39.836004 | orchestrator | Friday 02 January 2026 01:06:33 +0000 (0:00:02.336) 0:01:48.485 ********
2026-01-02 01:06:39.836010 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.836016 | orchestrator | skipping: [testbed-node-1]
2026-01-02 01:06:39.836022 | orchestrator | skipping: [testbed-node-2]
2026-01-02 01:06:39.836029 | orchestrator |
2026-01-02 01:06:39.836035 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-02 01:06:39.836042 | orchestrator | Friday 02 January 2026 01:06:34 +0000 (0:00:00.297) 0:01:48.782 ********
2026-01-02 01:06:39.836050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-02 01:06:39.836065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-02 01:06:39.836073 | orchestrator |
2026-01-02 01:06:39.836079 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-02 01:06:39.836086 | orchestrator | Friday 02 January 2026 01:06:36 +0000 (0:00:02.487) 0:01:51.269 ********
2026-01-02 01:06:39.836092 | orchestrator | skipping: [testbed-node-0]
2026-01-02 01:06:39.836098 | orchestrator |
2026-01-02 01:06:39.836110 | orchestrator | PLAY RECAP *********************************************************************
2026-01-02 01:06:39.836115 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-02 01:06:39.836119 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-02 01:06:39.836123 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-02 01:06:39.836127 | orchestrator |
2026-01-02 01:06:39.836131 | orchestrator |
2026-01-02 01:06:39.836134 | orchestrator | TASKS RECAP ********************************************************************
2026-01-02 01:06:39.836139 | orchestrator | Friday 02 January 2026 01:06:36 +0000 (0:00:00.257) 0:01:51.527 ********
2026-01-02 01:06:39.836145 | orchestrator | ===============================================================================
2026-01-02 01:06:39.836151 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.77s
2026-01-02 01:06:39.836157 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.55s
2026-01-02 01:06:39.836163 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.49s
2026-01-02 01:06:39.836169 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.34s
2026-01-02 01:06:39.836175 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.32s
2026-01-02 01:06:39.836182 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2026-01-02 01:06:39.836188 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.77s
2026-01-02 01:06:39.836195 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.48s
2026-01-02 01:06:39.836201 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s
2026-01-02 01:06:39.836208 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.36s
2026-01-02 01:06:39.836214 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s
2026-01-02 01:06:39.836220 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s
2026-01-02 01:06:39.836227 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.23s
2026-01-02 01:06:39.836233 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.09s
2026-01-02 01:06:39.836244 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.87s
2026-01-02 01:06:39.836251 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.80s
2026-01-02 01:06:39.836257 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.79s
2026-01-02 01:06:39.836264 | orchestrator | grafana : include_tasks
------------------------------------------------- 0.74s 2026-01-02 01:06:39.836270 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.74s 2026-01-02 01:06:39.836281 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.73s 2026-01-02 01:06:39.836314 | orchestrator | 2026-01-02 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:42.878191 | orchestrator | 2026-01-02 01:06:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:42.880129 | orchestrator | 2026-01-02 01:06:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:06:42.880599 | orchestrator | 2026-01-02 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:45.918412 | orchestrator | 2026-01-02 01:06:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:45.919845 | orchestrator | 2026-01-02 01:06:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:06:45.919896 | orchestrator | 2026-01-02 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:48.964725 | orchestrator | 2026-01-02 01:06:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:48.966349 | orchestrator | 2026-01-02 01:06:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:06:48.966867 | orchestrator | 2026-01-02 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:52.022716 | orchestrator | 2026-01-02 01:06:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:06:52.022807 | orchestrator | 2026-01-02 01:06:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:06:52.022818 | orchestrator | 2026-01-02 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:06:55.063044 | orchestrator | 
2026-01-02 01:06:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:06:55.064060 | orchestrator | 2026-01-02 01:06:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:06:55.064279 | orchestrator | 2026-01-02 01:06:55 | INFO  | Wait 1 second(s) until the next check
[... identical "is in state STARTED" / "Wait 1 second(s) until the next check" polling records for tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305, repeated every ~3 s from 01:06:58 through 01:11:05 ...]
2026-01-02 01:11:08.080430 | orchestrator | 2026-01-02 01:11:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in
state STARTED 2026-01-02 01:11:08.080718 | orchestrator | 2026-01-02 01:11:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:08.081235 | orchestrator | 2026-01-02 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:11.136021 | orchestrator | 2026-01-02 01:11:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:11.138100 | orchestrator | 2026-01-02 01:11:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:11.138158 | orchestrator | 2026-01-02 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:14.182281 | orchestrator | 2026-01-02 01:11:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:14.186495 | orchestrator | 2026-01-02 01:11:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:14.186566 | orchestrator | 2026-01-02 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:17.228751 | orchestrator | 2026-01-02 01:11:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:17.229822 | orchestrator | 2026-01-02 01:11:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:17.229861 | orchestrator | 2026-01-02 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:20.277760 | orchestrator | 2026-01-02 01:11:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:20.280059 | orchestrator | 2026-01-02 01:11:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:20.280145 | orchestrator | 2026-01-02 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:23.326816 | orchestrator | 2026-01-02 01:11:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:23.328629 | orchestrator | 2026-01-02 01:11:23 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:23.328795 | orchestrator | 2026-01-02 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:26.381834 | orchestrator | 2026-01-02 01:11:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:26.383647 | orchestrator | 2026-01-02 01:11:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:26.384177 | orchestrator | 2026-01-02 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:29.432936 | orchestrator | 2026-01-02 01:11:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:29.434390 | orchestrator | 2026-01-02 01:11:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:29.434423 | orchestrator | 2026-01-02 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:32.481617 | orchestrator | 2026-01-02 01:11:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:32.484178 | orchestrator | 2026-01-02 01:11:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:32.484229 | orchestrator | 2026-01-02 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:35.527904 | orchestrator | 2026-01-02 01:11:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:35.528770 | orchestrator | 2026-01-02 01:11:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:35.528933 | orchestrator | 2026-01-02 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:38.578179 | orchestrator | 2026-01-02 01:11:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:38.578457 | orchestrator | 2026-01-02 01:11:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:11:38.578565 | orchestrator | 2026-01-02 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:41.626172 | orchestrator | 2026-01-02 01:11:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:41.628359 | orchestrator | 2026-01-02 01:11:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:41.628417 | orchestrator | 2026-01-02 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:44.679766 | orchestrator | 2026-01-02 01:11:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:44.681220 | orchestrator | 2026-01-02 01:11:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:44.681276 | orchestrator | 2026-01-02 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:47.727536 | orchestrator | 2026-01-02 01:11:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:47.729860 | orchestrator | 2026-01-02 01:11:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:47.729920 | orchestrator | 2026-01-02 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:50.775674 | orchestrator | 2026-01-02 01:11:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:50.777322 | orchestrator | 2026-01-02 01:11:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:50.777390 | orchestrator | 2026-01-02 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:53.822567 | orchestrator | 2026-01-02 01:11:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:53.824063 | orchestrator | 2026-01-02 01:11:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:53.824106 | orchestrator | 2026-01-02 01:11:53 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:11:56.869866 | orchestrator | 2026-01-02 01:11:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:56.871227 | orchestrator | 2026-01-02 01:11:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:56.871275 | orchestrator | 2026-01-02 01:11:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:11:59.919013 | orchestrator | 2026-01-02 01:11:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:11:59.920896 | orchestrator | 2026-01-02 01:11:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:11:59.921058 | orchestrator | 2026-01-02 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:02.964686 | orchestrator | 2026-01-02 01:12:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:02.965785 | orchestrator | 2026-01-02 01:12:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:02.965821 | orchestrator | 2026-01-02 01:12:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:06.010267 | orchestrator | 2026-01-02 01:12:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:06.011741 | orchestrator | 2026-01-02 01:12:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:06.011823 | orchestrator | 2026-01-02 01:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:09.050101 | orchestrator | 2026-01-02 01:12:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:09.051002 | orchestrator | 2026-01-02 01:12:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:09.051026 | orchestrator | 2026-01-02 01:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:12.091186 | orchestrator | 
2026-01-02 01:12:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:12.091482 | orchestrator | 2026-01-02 01:12:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:12.092047 | orchestrator | 2026-01-02 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:15.135785 | orchestrator | 2026-01-02 01:12:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:15.137813 | orchestrator | 2026-01-02 01:12:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:15.137850 | orchestrator | 2026-01-02 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:18.177131 | orchestrator | 2026-01-02 01:12:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:18.179307 | orchestrator | 2026-01-02 01:12:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:18.179361 | orchestrator | 2026-01-02 01:12:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:21.229897 | orchestrator | 2026-01-02 01:12:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:21.231161 | orchestrator | 2026-01-02 01:12:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:21.231190 | orchestrator | 2026-01-02 01:12:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:24.281386 | orchestrator | 2026-01-02 01:12:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:24.283485 | orchestrator | 2026-01-02 01:12:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:24.283532 | orchestrator | 2026-01-02 01:12:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:27.326548 | orchestrator | 2026-01-02 01:12:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:12:27.330962 | orchestrator | 2026-01-02 01:12:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:27.331115 | orchestrator | 2026-01-02 01:12:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:30.378215 | orchestrator | 2026-01-02 01:12:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:30.379948 | orchestrator | 2026-01-02 01:12:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:30.379963 | orchestrator | 2026-01-02 01:12:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:33.426536 | orchestrator | 2026-01-02 01:12:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:33.428128 | orchestrator | 2026-01-02 01:12:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:33.428397 | orchestrator | 2026-01-02 01:12:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:36.478427 | orchestrator | 2026-01-02 01:12:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:36.479962 | orchestrator | 2026-01-02 01:12:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:36.480003 | orchestrator | 2026-01-02 01:12:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:39.525525 | orchestrator | 2026-01-02 01:12:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:39.527608 | orchestrator | 2026-01-02 01:12:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:39.527646 | orchestrator | 2026-01-02 01:12:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:42.569445 | orchestrator | 2026-01-02 01:12:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:42.572394 | orchestrator | 2026-01-02 01:12:42 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:42.572455 | orchestrator | 2026-01-02 01:12:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:45.624528 | orchestrator | 2026-01-02 01:12:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:45.626482 | orchestrator | 2026-01-02 01:12:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:45.626562 | orchestrator | 2026-01-02 01:12:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:48.670597 | orchestrator | 2026-01-02 01:12:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:48.672199 | orchestrator | 2026-01-02 01:12:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:48.672262 | orchestrator | 2026-01-02 01:12:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:51.715355 | orchestrator | 2026-01-02 01:12:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:51.717298 | orchestrator | 2026-01-02 01:12:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:51.717327 | orchestrator | 2026-01-02 01:12:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:54.760346 | orchestrator | 2026-01-02 01:12:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:54.763043 | orchestrator | 2026-01-02 01:12:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:12:54.763108 | orchestrator | 2026-01-02 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:12:57.812439 | orchestrator | 2026-01-02 01:12:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:12:57.813958 | orchestrator | 2026-01-02 01:12:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:12:57.814082 | orchestrator | 2026-01-02 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:00.860101 | orchestrator | 2026-01-02 01:13:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:00.861562 | orchestrator | 2026-01-02 01:13:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:00.861834 | orchestrator | 2026-01-02 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:03.906528 | orchestrator | 2026-01-02 01:13:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:03.908084 | orchestrator | 2026-01-02 01:13:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:03.908219 | orchestrator | 2026-01-02 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:06.954215 | orchestrator | 2026-01-02 01:13:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:06.955583 | orchestrator | 2026-01-02 01:13:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:06.955726 | orchestrator | 2026-01-02 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:09.999433 | orchestrator | 2026-01-02 01:13:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:10.000342 | orchestrator | 2026-01-02 01:13:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:10.000382 | orchestrator | 2026-01-02 01:13:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:13.045828 | orchestrator | 2026-01-02 01:13:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:13.046881 | orchestrator | 2026-01-02 01:13:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:13.047230 | orchestrator | 2026-01-02 01:13:13 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:13:16.091434 | orchestrator | 2026-01-02 01:13:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:16.092816 | orchestrator | 2026-01-02 01:13:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:16.092851 | orchestrator | 2026-01-02 01:13:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:19.129686 | orchestrator | 2026-01-02 01:13:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:19.131438 | orchestrator | 2026-01-02 01:13:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:19.131788 | orchestrator | 2026-01-02 01:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:22.170985 | orchestrator | 2026-01-02 01:13:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:22.171663 | orchestrator | 2026-01-02 01:13:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:22.171744 | orchestrator | 2026-01-02 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:25.221193 | orchestrator | 2026-01-02 01:13:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:25.223412 | orchestrator | 2026-01-02 01:13:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:25.223547 | orchestrator | 2026-01-02 01:13:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:28.270444 | orchestrator | 2026-01-02 01:13:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:28.271998 | orchestrator | 2026-01-02 01:13:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:28.272125 | orchestrator | 2026-01-02 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:31.320372 | orchestrator | 
2026-01-02 01:13:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:31.321822 | orchestrator | 2026-01-02 01:13:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:31.321867 | orchestrator | 2026-01-02 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:34.373162 | orchestrator | 2026-01-02 01:13:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:34.376111 | orchestrator | 2026-01-02 01:13:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:34.376155 | orchestrator | 2026-01-02 01:13:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:37.424422 | orchestrator | 2026-01-02 01:13:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:37.426608 | orchestrator | 2026-01-02 01:13:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:37.426665 | orchestrator | 2026-01-02 01:13:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:40.480216 | orchestrator | 2026-01-02 01:13:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:40.481687 | orchestrator | 2026-01-02 01:13:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:40.481787 | orchestrator | 2026-01-02 01:13:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:43.528556 | orchestrator | 2026-01-02 01:13:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:43.530145 | orchestrator | 2026-01-02 01:13:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:43.530201 | orchestrator | 2026-01-02 01:13:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:46.572300 | orchestrator | 2026-01-02 01:13:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:13:46.573413 | orchestrator | 2026-01-02 01:13:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:46.573575 | orchestrator | 2026-01-02 01:13:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:49.625238 | orchestrator | 2026-01-02 01:13:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:49.626832 | orchestrator | 2026-01-02 01:13:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:49.626868 | orchestrator | 2026-01-02 01:13:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:52.674870 | orchestrator | 2026-01-02 01:13:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:52.677287 | orchestrator | 2026-01-02 01:13:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:52.677362 | orchestrator | 2026-01-02 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:55.724486 | orchestrator | 2026-01-02 01:13:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:55.726576 | orchestrator | 2026-01-02 01:13:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:55.726642 | orchestrator | 2026-01-02 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:13:58.773338 | orchestrator | 2026-01-02 01:13:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:13:58.775735 | orchestrator | 2026-01-02 01:13:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:13:58.775816 | orchestrator | 2026-01-02 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:01.818096 | orchestrator | 2026-01-02 01:14:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:01.822631 | orchestrator | 2026-01-02 01:14:01 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:01.822705 | orchestrator | 2026-01-02 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:04.873446 | orchestrator | 2026-01-02 01:14:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:04.876649 | orchestrator | 2026-01-02 01:14:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:04.876727 | orchestrator | 2026-01-02 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:07.918256 | orchestrator | 2026-01-02 01:14:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:07.921091 | orchestrator | 2026-01-02 01:14:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:07.921140 | orchestrator | 2026-01-02 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:10.970350 | orchestrator | 2026-01-02 01:14:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:10.973957 | orchestrator | 2026-01-02 01:14:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:10.974085 | orchestrator | 2026-01-02 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:14.024576 | orchestrator | 2026-01-02 01:14:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:14.026205 | orchestrator | 2026-01-02 01:14:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:14.026373 | orchestrator | 2026-01-02 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:17.072931 | orchestrator | 2026-01-02 01:14:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:17.073348 | orchestrator | 2026-01-02 01:14:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:14:17.073395 | orchestrator | 2026-01-02 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:20.119029 | orchestrator | 2026-01-02 01:14:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:20.120398 | orchestrator | 2026-01-02 01:14:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:20.121022 | orchestrator | 2026-01-02 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:23.165454 | orchestrator | 2026-01-02 01:14:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:23.166530 | orchestrator | 2026-01-02 01:14:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:23.166563 | orchestrator | 2026-01-02 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:26.213956 | orchestrator | 2026-01-02 01:14:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:26.215933 | orchestrator | 2026-01-02 01:14:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:26.215980 | orchestrator | 2026-01-02 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:29.264339 | orchestrator | 2026-01-02 01:14:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:29.267159 | orchestrator | 2026-01-02 01:14:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:29.267226 | orchestrator | 2026-01-02 01:14:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:14:32.315237 | orchestrator | 2026-01-02 01:14:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:14:32.318003 | orchestrator | 2026-01-02 01:14:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:14:32.318218 | orchestrator | 2026-01-02 01:14:32 | INFO  | Wait 
1 second(s) until the next check
2026-01-02 01:14:35.367614 | orchestrator | 2026-01-02 01:14:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:14:35.368609 | orchestrator | 2026-01-02 01:14:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:14:35.368637 | orchestrator | 2026-01-02 01:14:35 | INFO  | Wait 1 second(s) until the next check
[repeated polling output elided: both tasks remained in state STARTED, checked every ~3 seconds from 01:14:35 to 01:19:49]
2026-01-02 01:19:49.511822 | orchestrator | 2026-01-02 01:19:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:19:49.512504 | orchestrator | 2026-01-02 01:19:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:19:49.512530 | orchestrator | 2026-01-02 01:19:49 | INFO  | Wait
1 second(s) until the next check 2026-01-02 01:19:52.556302 | orchestrator | 2026-01-02 01:19:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:19:52.557678 | orchestrator | 2026-01-02 01:19:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:19:52.557733 | orchestrator | 2026-01-02 01:19:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:19:55.602422 | orchestrator | 2026-01-02 01:19:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:19:55.604076 | orchestrator | 2026-01-02 01:19:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:19:55.604149 | orchestrator | 2026-01-02 01:19:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:19:58.651100 | orchestrator | 2026-01-02 01:19:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:19:58.652446 | orchestrator | 2026-01-02 01:19:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:19:58.652506 | orchestrator | 2026-01-02 01:19:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:01.694471 | orchestrator | 2026-01-02 01:20:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:01.696336 | orchestrator | 2026-01-02 01:20:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:01.696411 | orchestrator | 2026-01-02 01:20:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:04.738353 | orchestrator | 2026-01-02 01:20:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:04.740244 | orchestrator | 2026-01-02 01:20:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:04.740422 | orchestrator | 2026-01-02 01:20:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:07.783561 | orchestrator | 
2026-01-02 01:20:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:07.784832 | orchestrator | 2026-01-02 01:20:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:07.784872 | orchestrator | 2026-01-02 01:20:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:10.832818 | orchestrator | 2026-01-02 01:20:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:10.834245 | orchestrator | 2026-01-02 01:20:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:10.834313 | orchestrator | 2026-01-02 01:20:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:13.885133 | orchestrator | 2026-01-02 01:20:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:13.886606 | orchestrator | 2026-01-02 01:20:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:13.886718 | orchestrator | 2026-01-02 01:20:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:16.934846 | orchestrator | 2026-01-02 01:20:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:16.936626 | orchestrator | 2026-01-02 01:20:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:16.936686 | orchestrator | 2026-01-02 01:20:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:19.983602 | orchestrator | 2026-01-02 01:20:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:19.984774 | orchestrator | 2026-01-02 01:20:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:19.984889 | orchestrator | 2026-01-02 01:20:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:23.036501 | orchestrator | 2026-01-02 01:20:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:20:23.037984 | orchestrator | 2026-01-02 01:20:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:23.038076 | orchestrator | 2026-01-02 01:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:26.081434 | orchestrator | 2026-01-02 01:20:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:26.083637 | orchestrator | 2026-01-02 01:20:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:26.083725 | orchestrator | 2026-01-02 01:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:29.136003 | orchestrator | 2026-01-02 01:20:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:29.136957 | orchestrator | 2026-01-02 01:20:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:29.136993 | orchestrator | 2026-01-02 01:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:32.186076 | orchestrator | 2026-01-02 01:20:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:32.187112 | orchestrator | 2026-01-02 01:20:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:32.187152 | orchestrator | 2026-01-02 01:20:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:35.230263 | orchestrator | 2026-01-02 01:20:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:35.231625 | orchestrator | 2026-01-02 01:20:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:35.231679 | orchestrator | 2026-01-02 01:20:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:38.280242 | orchestrator | 2026-01-02 01:20:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:38.281918 | orchestrator | 2026-01-02 01:20:38 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:38.281977 | orchestrator | 2026-01-02 01:20:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:41.321230 | orchestrator | 2026-01-02 01:20:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:41.322646 | orchestrator | 2026-01-02 01:20:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:41.322709 | orchestrator | 2026-01-02 01:20:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:44.375673 | orchestrator | 2026-01-02 01:20:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:44.377122 | orchestrator | 2026-01-02 01:20:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:44.377167 | orchestrator | 2026-01-02 01:20:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:47.424079 | orchestrator | 2026-01-02 01:20:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:47.425991 | orchestrator | 2026-01-02 01:20:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:47.426107 | orchestrator | 2026-01-02 01:20:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:50.474075 | orchestrator | 2026-01-02 01:20:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:50.477657 | orchestrator | 2026-01-02 01:20:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:50.477706 | orchestrator | 2026-01-02 01:20:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:53.524846 | orchestrator | 2026-01-02 01:20:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:53.525636 | orchestrator | 2026-01-02 01:20:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:20:53.525677 | orchestrator | 2026-01-02 01:20:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:56.575283 | orchestrator | 2026-01-02 01:20:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:56.577060 | orchestrator | 2026-01-02 01:20:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:56.577181 | orchestrator | 2026-01-02 01:20:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:20:59.622993 | orchestrator | 2026-01-02 01:20:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:20:59.624670 | orchestrator | 2026-01-02 01:20:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:20:59.624789 | orchestrator | 2026-01-02 01:20:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:02.674224 | orchestrator | 2026-01-02 01:21:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:02.675380 | orchestrator | 2026-01-02 01:21:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:02.675603 | orchestrator | 2026-01-02 01:21:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:05.720499 | orchestrator | 2026-01-02 01:21:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:05.722325 | orchestrator | 2026-01-02 01:21:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:05.722363 | orchestrator | 2026-01-02 01:21:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:08.766337 | orchestrator | 2026-01-02 01:21:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:08.768100 | orchestrator | 2026-01-02 01:21:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:08.768154 | orchestrator | 2026-01-02 01:21:08 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:21:11.815399 | orchestrator | 2026-01-02 01:21:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:11.817678 | orchestrator | 2026-01-02 01:21:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:11.817851 | orchestrator | 2026-01-02 01:21:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:14.866918 | orchestrator | 2026-01-02 01:21:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:14.867836 | orchestrator | 2026-01-02 01:21:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:14.867891 | orchestrator | 2026-01-02 01:21:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:17.914612 | orchestrator | 2026-01-02 01:21:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:17.916550 | orchestrator | 2026-01-02 01:21:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:17.916618 | orchestrator | 2026-01-02 01:21:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:20.966284 | orchestrator | 2026-01-02 01:21:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:20.968042 | orchestrator | 2026-01-02 01:21:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:20.968074 | orchestrator | 2026-01-02 01:21:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:24.015291 | orchestrator | 2026-01-02 01:21:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:24.016517 | orchestrator | 2026-01-02 01:21:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:24.016575 | orchestrator | 2026-01-02 01:21:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:27.060770 | orchestrator | 
2026-01-02 01:21:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:27.062743 | orchestrator | 2026-01-02 01:21:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:27.062894 | orchestrator | 2026-01-02 01:21:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:30.113267 | orchestrator | 2026-01-02 01:21:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:30.115272 | orchestrator | 2026-01-02 01:21:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:30.115314 | orchestrator | 2026-01-02 01:21:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:33.165286 | orchestrator | 2026-01-02 01:21:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:33.166482 | orchestrator | 2026-01-02 01:21:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:33.166529 | orchestrator | 2026-01-02 01:21:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:36.214205 | orchestrator | 2026-01-02 01:21:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:36.215885 | orchestrator | 2026-01-02 01:21:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:36.215966 | orchestrator | 2026-01-02 01:21:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:39.260082 | orchestrator | 2026-01-02 01:21:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:39.261680 | orchestrator | 2026-01-02 01:21:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:39.262353 | orchestrator | 2026-01-02 01:21:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:42.308912 | orchestrator | 2026-01-02 01:21:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:21:42.310760 | orchestrator | 2026-01-02 01:21:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:42.310881 | orchestrator | 2026-01-02 01:21:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:45.354930 | orchestrator | 2026-01-02 01:21:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:45.357914 | orchestrator | 2026-01-02 01:21:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:45.358638 | orchestrator | 2026-01-02 01:21:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:48.402098 | orchestrator | 2026-01-02 01:21:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:48.403894 | orchestrator | 2026-01-02 01:21:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:48.403972 | orchestrator | 2026-01-02 01:21:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:51.445877 | orchestrator | 2026-01-02 01:21:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:51.447455 | orchestrator | 2026-01-02 01:21:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:51.447523 | orchestrator | 2026-01-02 01:21:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:54.494890 | orchestrator | 2026-01-02 01:21:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:54.496738 | orchestrator | 2026-01-02 01:21:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:54.496803 | orchestrator | 2026-01-02 01:21:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:21:57.546895 | orchestrator | 2026-01-02 01:21:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:21:57.549288 | orchestrator | 2026-01-02 01:21:57 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:21:57.549383 | orchestrator | 2026-01-02 01:21:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:00.596432 | orchestrator | 2026-01-02 01:22:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:00.596981 | orchestrator | 2026-01-02 01:22:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:00.597010 | orchestrator | 2026-01-02 01:22:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:03.635870 | orchestrator | 2026-01-02 01:22:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:03.637152 | orchestrator | 2026-01-02 01:22:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:03.637170 | orchestrator | 2026-01-02 01:22:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:06.682962 | orchestrator | 2026-01-02 01:22:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:06.684241 | orchestrator | 2026-01-02 01:22:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:06.684334 | orchestrator | 2026-01-02 01:22:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:09.726070 | orchestrator | 2026-01-02 01:22:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:09.727921 | orchestrator | 2026-01-02 01:22:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:09.728002 | orchestrator | 2026-01-02 01:22:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:12.779257 | orchestrator | 2026-01-02 01:22:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:12.781443 | orchestrator | 2026-01-02 01:22:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:22:12.781563 | orchestrator | 2026-01-02 01:22:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:15.824121 | orchestrator | 2026-01-02 01:22:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:15.824895 | orchestrator | 2026-01-02 01:22:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:15.825011 | orchestrator | 2026-01-02 01:22:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:18.872470 | orchestrator | 2026-01-02 01:22:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:18.873898 | orchestrator | 2026-01-02 01:22:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:18.874098 | orchestrator | 2026-01-02 01:22:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:21.921197 | orchestrator | 2026-01-02 01:22:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:21.922743 | orchestrator | 2026-01-02 01:22:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:21.922879 | orchestrator | 2026-01-02 01:22:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:24.974530 | orchestrator | 2026-01-02 01:22:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:24.976123 | orchestrator | 2026-01-02 01:22:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:24.976148 | orchestrator | 2026-01-02 01:22:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:28.027189 | orchestrator | 2026-01-02 01:22:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:28.029437 | orchestrator | 2026-01-02 01:22:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:28.029476 | orchestrator | 2026-01-02 01:22:28 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:22:31.077225 | orchestrator | 2026-01-02 01:22:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:31.078633 | orchestrator | 2026-01-02 01:22:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:31.078676 | orchestrator | 2026-01-02 01:22:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:34.122498 | orchestrator | 2026-01-02 01:22:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:34.123375 | orchestrator | 2026-01-02 01:22:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:34.123407 | orchestrator | 2026-01-02 01:22:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:37.168190 | orchestrator | 2026-01-02 01:22:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:37.171024 | orchestrator | 2026-01-02 01:22:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:37.171403 | orchestrator | 2026-01-02 01:22:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:40.231987 | orchestrator | 2026-01-02 01:22:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:40.235782 | orchestrator | 2026-01-02 01:22:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:40.235859 | orchestrator | 2026-01-02 01:22:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:43.291142 | orchestrator | 2026-01-02 01:22:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:43.292480 | orchestrator | 2026-01-02 01:22:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:43.292533 | orchestrator | 2026-01-02 01:22:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:46.338609 | orchestrator | 
2026-01-02 01:22:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:46.339729 | orchestrator | 2026-01-02 01:22:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:46.339760 | orchestrator | 2026-01-02 01:22:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:49.395558 | orchestrator | 2026-01-02 01:22:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:49.397435 | orchestrator | 2026-01-02 01:22:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:49.397530 | orchestrator | 2026-01-02 01:22:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:52.448451 | orchestrator | 2026-01-02 01:22:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:52.451428 | orchestrator | 2026-01-02 01:22:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:52.451470 | orchestrator | 2026-01-02 01:22:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:55.497235 | orchestrator | 2026-01-02 01:22:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:55.497730 | orchestrator | 2026-01-02 01:22:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:55.497751 | orchestrator | 2026-01-02 01:22:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:22:58.544557 | orchestrator | 2026-01-02 01:22:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:22:58.545579 | orchestrator | 2026-01-02 01:22:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:22:58.545628 | orchestrator | 2026-01-02 01:22:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:01.584837 | orchestrator | 2026-01-02 01:23:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:23:01.586120 | orchestrator | 2026-01-02 01:23:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:01.586167 | orchestrator | 2026-01-02 01:23:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:04.634296 | orchestrator | 2026-01-02 01:23:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:04.635444 | orchestrator | 2026-01-02 01:23:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:04.635487 | orchestrator | 2026-01-02 01:23:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:07.679571 | orchestrator | 2026-01-02 01:23:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:07.681348 | orchestrator | 2026-01-02 01:23:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:07.681383 | orchestrator | 2026-01-02 01:23:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:10.729167 | orchestrator | 2026-01-02 01:23:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:10.730458 | orchestrator | 2026-01-02 01:23:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:10.730489 | orchestrator | 2026-01-02 01:23:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:13.778413 | orchestrator | 2026-01-02 01:23:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:13.780461 | orchestrator | 2026-01-02 01:23:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:13.780490 | orchestrator | 2026-01-02 01:23:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:16.828304 | orchestrator | 2026-01-02 01:23:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:16.829997 | orchestrator | 2026-01-02 01:23:16 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:16.830148 | orchestrator | 2026-01-02 01:23:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:19.881749 | orchestrator | 2026-01-02 01:23:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:19.882920 | orchestrator | 2026-01-02 01:23:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:19.882999 | orchestrator | 2026-01-02 01:23:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:22.921913 | orchestrator | 2026-01-02 01:23:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:22.922520 | orchestrator | 2026-01-02 01:23:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:22.922597 | orchestrator | 2026-01-02 01:23:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:25.975914 | orchestrator | 2026-01-02 01:23:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:25.977675 | orchestrator | 2026-01-02 01:23:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:25.977751 | orchestrator | 2026-01-02 01:23:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:29.031811 | orchestrator | 2026-01-02 01:23:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:29.031931 | orchestrator | 2026-01-02 01:23:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:23:29.031957 | orchestrator | 2026-01-02 01:23:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:23:32.080654 | orchestrator | 2026-01-02 01:23:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:23:32.081034 | orchestrator | 2026-01-02 01:23:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:23:32.081146 | orchestrator | 2026-01-02 01:23:32 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:23:35.137758 | orchestrator | 2026-01-02 01:23:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:23:35.137894 | orchestrator | 2026-01-02 01:23:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:23:35.137906 | orchestrator | 2026-01-02 01:23:35 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:29:01.375413 | orchestrator | 2026-01-02 01:29:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:29:01.376793 | orchestrator | 2026-01-02 01:29:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:29:01.376842 | orchestrator | 2026-01-02 01:29:01 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:29:04.423416 | orchestrator | 2026-01-02 01:29:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:29:04.425435 | orchestrator | 2026-01-02 01:29:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:29:04.425481 | orchestrator | 2026-01-02 01:29:04 | INFO  | Wait
1 second(s) until the next check 2026-01-02 01:29:07.476375 | orchestrator | 2026-01-02 01:29:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:07.477754 | orchestrator | 2026-01-02 01:29:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:07.478100 | orchestrator | 2026-01-02 01:29:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:10.521409 | orchestrator | 2026-01-02 01:29:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:10.523530 | orchestrator | 2026-01-02 01:29:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:10.523580 | orchestrator | 2026-01-02 01:29:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:13.567630 | orchestrator | 2026-01-02 01:29:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:13.570702 | orchestrator | 2026-01-02 01:29:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:13.570754 | orchestrator | 2026-01-02 01:29:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:16.620212 | orchestrator | 2026-01-02 01:29:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:16.623304 | orchestrator | 2026-01-02 01:29:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:16.623595 | orchestrator | 2026-01-02 01:29:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:19.669945 | orchestrator | 2026-01-02 01:29:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:19.671349 | orchestrator | 2026-01-02 01:29:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:19.671381 | orchestrator | 2026-01-02 01:29:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:22.715773 | orchestrator | 
2026-01-02 01:29:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:22.717756 | orchestrator | 2026-01-02 01:29:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:22.717888 | orchestrator | 2026-01-02 01:29:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:25.763924 | orchestrator | 2026-01-02 01:29:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:25.766135 | orchestrator | 2026-01-02 01:29:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:25.766168 | orchestrator | 2026-01-02 01:29:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:28.815166 | orchestrator | 2026-01-02 01:29:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:28.817058 | orchestrator | 2026-01-02 01:29:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:28.817176 | orchestrator | 2026-01-02 01:29:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:31.864787 | orchestrator | 2026-01-02 01:29:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:31.866606 | orchestrator | 2026-01-02 01:29:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:31.866668 | orchestrator | 2026-01-02 01:29:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:34.914160 | orchestrator | 2026-01-02 01:29:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:34.915979 | orchestrator | 2026-01-02 01:29:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:34.916031 | orchestrator | 2026-01-02 01:29:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:37.962603 | orchestrator | 2026-01-02 01:29:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:29:37.963873 | orchestrator | 2026-01-02 01:29:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:37.964403 | orchestrator | 2026-01-02 01:29:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:41.009995 | orchestrator | 2026-01-02 01:29:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:41.011764 | orchestrator | 2026-01-02 01:29:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:41.011846 | orchestrator | 2026-01-02 01:29:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:44.060713 | orchestrator | 2026-01-02 01:29:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:44.062290 | orchestrator | 2026-01-02 01:29:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:44.062382 | orchestrator | 2026-01-02 01:29:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:47.113750 | orchestrator | 2026-01-02 01:29:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:47.116118 | orchestrator | 2026-01-02 01:29:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:47.116423 | orchestrator | 2026-01-02 01:29:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:50.163759 | orchestrator | 2026-01-02 01:29:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:50.165760 | orchestrator | 2026-01-02 01:29:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:50.165895 | orchestrator | 2026-01-02 01:29:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:53.215605 | orchestrator | 2026-01-02 01:29:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:53.217848 | orchestrator | 2026-01-02 01:29:53 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:53.217895 | orchestrator | 2026-01-02 01:29:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:56.261619 | orchestrator | 2026-01-02 01:29:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:56.264694 | orchestrator | 2026-01-02 01:29:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:56.264782 | orchestrator | 2026-01-02 01:29:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:29:59.311473 | orchestrator | 2026-01-02 01:29:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:29:59.312532 | orchestrator | 2026-01-02 01:29:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:29:59.312585 | orchestrator | 2026-01-02 01:29:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:02.361008 | orchestrator | 2026-01-02 01:30:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:02.362227 | orchestrator | 2026-01-02 01:30:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:02.362397 | orchestrator | 2026-01-02 01:30:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:05.410205 | orchestrator | 2026-01-02 01:30:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:05.411788 | orchestrator | 2026-01-02 01:30:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:05.411840 | orchestrator | 2026-01-02 01:30:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:08.459547 | orchestrator | 2026-01-02 01:30:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:08.461314 | orchestrator | 2026-01-02 01:30:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:30:08.461357 | orchestrator | 2026-01-02 01:30:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:11.505220 | orchestrator | 2026-01-02 01:30:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:11.505382 | orchestrator | 2026-01-02 01:30:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:11.505403 | orchestrator | 2026-01-02 01:30:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:14.553739 | orchestrator | 2026-01-02 01:30:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:14.555720 | orchestrator | 2026-01-02 01:30:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:14.555816 | orchestrator | 2026-01-02 01:30:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:17.605269 | orchestrator | 2026-01-02 01:30:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:17.607441 | orchestrator | 2026-01-02 01:30:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:17.607485 | orchestrator | 2026-01-02 01:30:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:20.651581 | orchestrator | 2026-01-02 01:30:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:20.652032 | orchestrator | 2026-01-02 01:30:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:20.652058 | orchestrator | 2026-01-02 01:30:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:23.693153 | orchestrator | 2026-01-02 01:30:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:23.693561 | orchestrator | 2026-01-02 01:30:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:23.693965 | orchestrator | 2026-01-02 01:30:23 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:30:26.734149 | orchestrator | 2026-01-02 01:30:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:26.734755 | orchestrator | 2026-01-02 01:30:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:26.734828 | orchestrator | 2026-01-02 01:30:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:29.779495 | orchestrator | 2026-01-02 01:30:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:29.782466 | orchestrator | 2026-01-02 01:30:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:29.782504 | orchestrator | 2026-01-02 01:30:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:32.831912 | orchestrator | 2026-01-02 01:30:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:32.835081 | orchestrator | 2026-01-02 01:30:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:32.835316 | orchestrator | 2026-01-02 01:30:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:35.881141 | orchestrator | 2026-01-02 01:30:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:35.882432 | orchestrator | 2026-01-02 01:30:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:35.882480 | orchestrator | 2026-01-02 01:30:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:38.926855 | orchestrator | 2026-01-02 01:30:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:38.929874 | orchestrator | 2026-01-02 01:30:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:38.930181 | orchestrator | 2026-01-02 01:30:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:41.981326 | orchestrator | 
2026-01-02 01:30:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:41.983040 | orchestrator | 2026-01-02 01:30:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:41.983087 | orchestrator | 2026-01-02 01:30:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:45.032837 | orchestrator | 2026-01-02 01:30:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:45.036868 | orchestrator | 2026-01-02 01:30:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:45.036967 | orchestrator | 2026-01-02 01:30:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:48.088351 | orchestrator | 2026-01-02 01:30:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:48.089003 | orchestrator | 2026-01-02 01:30:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:48.089026 | orchestrator | 2026-01-02 01:30:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:51.135504 | orchestrator | 2026-01-02 01:30:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:51.137706 | orchestrator | 2026-01-02 01:30:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:51.137770 | orchestrator | 2026-01-02 01:30:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:54.186243 | orchestrator | 2026-01-02 01:30:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:30:54.187786 | orchestrator | 2026-01-02 01:30:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:54.187824 | orchestrator | 2026-01-02 01:30:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:30:57.232183 | orchestrator | 2026-01-02 01:30:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:30:57.233136 | orchestrator | 2026-01-02 01:30:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:30:57.233198 | orchestrator | 2026-01-02 01:30:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:00.279752 | orchestrator | 2026-01-02 01:31:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:00.281589 | orchestrator | 2026-01-02 01:31:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:00.281685 | orchestrator | 2026-01-02 01:31:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:03.332465 | orchestrator | 2026-01-02 01:31:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:03.333783 | orchestrator | 2026-01-02 01:31:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:03.334255 | orchestrator | 2026-01-02 01:31:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:06.377536 | orchestrator | 2026-01-02 01:31:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:06.379094 | orchestrator | 2026-01-02 01:31:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:06.379155 | orchestrator | 2026-01-02 01:31:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:09.423549 | orchestrator | 2026-01-02 01:31:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:09.424463 | orchestrator | 2026-01-02 01:31:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:09.424833 | orchestrator | 2026-01-02 01:31:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:12.462279 | orchestrator | 2026-01-02 01:31:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:12.463447 | orchestrator | 2026-01-02 01:31:12 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:12.463608 | orchestrator | 2026-01-02 01:31:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:15.510210 | orchestrator | 2026-01-02 01:31:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:15.510427 | orchestrator | 2026-01-02 01:31:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:15.510540 | orchestrator | 2026-01-02 01:31:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:18.560482 | orchestrator | 2026-01-02 01:31:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:18.562122 | orchestrator | 2026-01-02 01:31:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:18.562153 | orchestrator | 2026-01-02 01:31:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:21.608286 | orchestrator | 2026-01-02 01:31:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:21.611214 | orchestrator | 2026-01-02 01:31:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:21.611283 | orchestrator | 2026-01-02 01:31:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:24.656734 | orchestrator | 2026-01-02 01:31:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:24.657920 | orchestrator | 2026-01-02 01:31:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:24.657953 | orchestrator | 2026-01-02 01:31:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:27.703371 | orchestrator | 2026-01-02 01:31:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:27.704792 | orchestrator | 2026-01-02 01:31:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:31:27.704840 | orchestrator | 2026-01-02 01:31:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:30.754310 | orchestrator | 2026-01-02 01:31:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:30.755064 | orchestrator | 2026-01-02 01:31:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:30.755174 | orchestrator | 2026-01-02 01:31:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:33.799777 | orchestrator | 2026-01-02 01:31:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:33.800714 | orchestrator | 2026-01-02 01:31:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:33.800820 | orchestrator | 2026-01-02 01:31:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:36.844619 | orchestrator | 2026-01-02 01:31:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:36.846336 | orchestrator | 2026-01-02 01:31:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:36.846380 | orchestrator | 2026-01-02 01:31:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:39.892243 | orchestrator | 2026-01-02 01:31:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:39.893805 | orchestrator | 2026-01-02 01:31:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:39.893832 | orchestrator | 2026-01-02 01:31:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:42.942007 | orchestrator | 2026-01-02 01:31:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:42.943368 | orchestrator | 2026-01-02 01:31:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:42.943410 | orchestrator | 2026-01-02 01:31:42 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:31:45.988588 | orchestrator | 2026-01-02 01:31:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:45.989912 | orchestrator | 2026-01-02 01:31:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:45.989938 | orchestrator | 2026-01-02 01:31:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:49.044355 | orchestrator | 2026-01-02 01:31:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:49.046194 | orchestrator | 2026-01-02 01:31:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:49.046261 | orchestrator | 2026-01-02 01:31:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:52.084815 | orchestrator | 2026-01-02 01:31:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:52.085252 | orchestrator | 2026-01-02 01:31:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:52.085334 | orchestrator | 2026-01-02 01:31:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:55.133291 | orchestrator | 2026-01-02 01:31:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:55.134255 | orchestrator | 2026-01-02 01:31:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:55.134395 | orchestrator | 2026-01-02 01:31:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:31:58.184939 | orchestrator | 2026-01-02 01:31:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:31:58.186950 | orchestrator | 2026-01-02 01:31:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:31:58.187017 | orchestrator | 2026-01-02 01:31:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:01.237335 | orchestrator | 
2026-01-02 01:32:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:01.239215 | orchestrator | 2026-01-02 01:32:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:01.239263 | orchestrator | 2026-01-02 01:32:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:04.287421 | orchestrator | 2026-01-02 01:32:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:04.289288 | orchestrator | 2026-01-02 01:32:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:04.289313 | orchestrator | 2026-01-02 01:32:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:07.340431 | orchestrator | 2026-01-02 01:32:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:07.341988 | orchestrator | 2026-01-02 01:32:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:07.342209 | orchestrator | 2026-01-02 01:32:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:10.392963 | orchestrator | 2026-01-02 01:32:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:10.394512 | orchestrator | 2026-01-02 01:32:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:10.394627 | orchestrator | 2026-01-02 01:32:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:13.447331 | orchestrator | 2026-01-02 01:32:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:13.448992 | orchestrator | 2026-01-02 01:32:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:13.449082 | orchestrator | 2026-01-02 01:32:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:16.501422 | orchestrator | 2026-01-02 01:32:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:32:16.503398 | orchestrator | 2026-01-02 01:32:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:16.503534 | orchestrator | 2026-01-02 01:32:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:19.559777 | orchestrator | 2026-01-02 01:32:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:19.561684 | orchestrator | 2026-01-02 01:32:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:19.561762 | orchestrator | 2026-01-02 01:32:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:22.612817 | orchestrator | 2026-01-02 01:32:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:22.614437 | orchestrator | 2026-01-02 01:32:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:22.614484 | orchestrator | 2026-01-02 01:32:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:25.661342 | orchestrator | 2026-01-02 01:32:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:25.665312 | orchestrator | 2026-01-02 01:32:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:25.665370 | orchestrator | 2026-01-02 01:32:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:28.720747 | orchestrator | 2026-01-02 01:32:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:28.723402 | orchestrator | 2026-01-02 01:32:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:28.723466 | orchestrator | 2026-01-02 01:32:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:31.774717 | orchestrator | 2026-01-02 01:32:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:31.775870 | orchestrator | 2026-01-02 01:32:31 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:31.775934 | orchestrator | 2026-01-02 01:32:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:34.824746 | orchestrator | 2026-01-02 01:32:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:34.826374 | orchestrator | 2026-01-02 01:32:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:34.826427 | orchestrator | 2026-01-02 01:32:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:37.880988 | orchestrator | 2026-01-02 01:32:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:37.882203 | orchestrator | 2026-01-02 01:32:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:37.882249 | orchestrator | 2026-01-02 01:32:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:40.933729 | orchestrator | 2026-01-02 01:32:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:40.934146 | orchestrator | 2026-01-02 01:32:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:40.934187 | orchestrator | 2026-01-02 01:32:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:43.982296 | orchestrator | 2026-01-02 01:32:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:43.984436 | orchestrator | 2026-01-02 01:32:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:43.984546 | orchestrator | 2026-01-02 01:32:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:47.035528 | orchestrator | 2026-01-02 01:32:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:47.036610 | orchestrator | 2026-01-02 01:32:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:32:47.036808 | orchestrator | 2026-01-02 01:32:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:50.087885 | orchestrator | 2026-01-02 01:32:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:50.089979 | orchestrator | 2026-01-02 01:32:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:50.090110 | orchestrator | 2026-01-02 01:32:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:53.143871 | orchestrator | 2026-01-02 01:32:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:53.145408 | orchestrator | 2026-01-02 01:32:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:53.145467 | orchestrator | 2026-01-02 01:32:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:56.199356 | orchestrator | 2026-01-02 01:32:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:56.201871 | orchestrator | 2026-01-02 01:32:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:56.201950 | orchestrator | 2026-01-02 01:32:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:32:59.263783 | orchestrator | 2026-01-02 01:32:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:32:59.264240 | orchestrator | 2026-01-02 01:32:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:32:59.264280 | orchestrator | 2026-01-02 01:32:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:02.307501 | orchestrator | 2026-01-02 01:33:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:02.308825 | orchestrator | 2026-01-02 01:33:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:02.308867 | orchestrator | 2026-01-02 01:33:02 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:33:05.355575 | orchestrator | 2026-01-02 01:33:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:05.356455 | orchestrator | 2026-01-02 01:33:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:05.356491 | orchestrator | 2026-01-02 01:33:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:08.395717 | orchestrator | 2026-01-02 01:33:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:08.397245 | orchestrator | 2026-01-02 01:33:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:08.397337 | orchestrator | 2026-01-02 01:33:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:11.449333 | orchestrator | 2026-01-02 01:33:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:11.451876 | orchestrator | 2026-01-02 01:33:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:11.452009 | orchestrator | 2026-01-02 01:33:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:14.502467 | orchestrator | 2026-01-02 01:33:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:14.503780 | orchestrator | 2026-01-02 01:33:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:14.503840 | orchestrator | 2026-01-02 01:33:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:17.556163 | orchestrator | 2026-01-02 01:33:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:17.558261 | orchestrator | 2026-01-02 01:33:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:17.558345 | orchestrator | 2026-01-02 01:33:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:20.609706 | orchestrator | 
2026-01-02 01:33:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:20.611330 | orchestrator | 2026-01-02 01:33:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:20.611423 | orchestrator | 2026-01-02 01:33:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:23.661449 | orchestrator | 2026-01-02 01:33:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:23.663768 | orchestrator | 2026-01-02 01:33:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:23.664195 | orchestrator | 2026-01-02 01:33:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:26.714909 | orchestrator | 2026-01-02 01:33:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:26.716958 | orchestrator | 2026-01-02 01:33:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:26.717045 | orchestrator | 2026-01-02 01:33:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:29.765063 | orchestrator | 2026-01-02 01:33:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:29.767189 | orchestrator | 2026-01-02 01:33:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:29.767234 | orchestrator | 2026-01-02 01:33:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:32.817213 | orchestrator | 2026-01-02 01:33:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:32.820444 | orchestrator | 2026-01-02 01:33:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:32.820541 | orchestrator | 2026-01-02 01:33:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:35.867597 | orchestrator | 2026-01-02 01:33:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:33:35.869031 | orchestrator | 2026-01-02 01:33:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:35.869074 | orchestrator | 2026-01-02 01:33:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:38.914728 | orchestrator | 2026-01-02 01:33:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:38.916567 | orchestrator | 2026-01-02 01:33:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:38.916666 | orchestrator | 2026-01-02 01:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:41.966853 | orchestrator | 2026-01-02 01:33:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:41.967079 | orchestrator | 2026-01-02 01:33:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:41.967234 | orchestrator | 2026-01-02 01:33:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:45.013598 | orchestrator | 2026-01-02 01:33:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:45.015271 | orchestrator | 2026-01-02 01:33:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:45.015812 | orchestrator | 2026-01-02 01:33:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:48.063131 | orchestrator | 2026-01-02 01:33:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:48.065880 | orchestrator | 2026-01-02 01:33:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:48.065937 | orchestrator | 2026-01-02 01:33:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:51.112757 | orchestrator | 2026-01-02 01:33:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:51.115248 | orchestrator | 2026-01-02 01:33:51 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:51.115420 | orchestrator | 2026-01-02 01:33:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:54.165834 | orchestrator | 2026-01-02 01:33:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:54.167740 | orchestrator | 2026-01-02 01:33:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:54.167833 | orchestrator | 2026-01-02 01:33:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:33:57.215775 | orchestrator | 2026-01-02 01:33:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:33:57.217321 | orchestrator | 2026-01-02 01:33:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:33:57.217422 | orchestrator | 2026-01-02 01:33:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:00.255824 | orchestrator | 2026-01-02 01:34:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:00.256748 | orchestrator | 2026-01-02 01:34:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:00.256787 | orchestrator | 2026-01-02 01:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:03.303100 | orchestrator | 2026-01-02 01:34:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:03.303886 | orchestrator | 2026-01-02 01:34:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:03.303921 | orchestrator | 2026-01-02 01:34:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:06.347984 | orchestrator | 2026-01-02 01:34:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:06.349652 | orchestrator | 2026-01-02 01:34:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:34:06.349801 | orchestrator | 2026-01-02 01:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:09.396401 | orchestrator | 2026-01-02 01:34:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:09.397510 | orchestrator | 2026-01-02 01:34:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:09.397748 | orchestrator | 2026-01-02 01:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:12.443195 | orchestrator | 2026-01-02 01:34:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:12.445049 | orchestrator | 2026-01-02 01:34:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:12.445164 | orchestrator | 2026-01-02 01:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:15.490678 | orchestrator | 2026-01-02 01:34:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:15.491355 | orchestrator | 2026-01-02 01:34:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:15.491390 | orchestrator | 2026-01-02 01:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:18.540515 | orchestrator | 2026-01-02 01:34:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:18.542548 | orchestrator | 2026-01-02 01:34:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:18.542848 | orchestrator | 2026-01-02 01:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:21.589932 | orchestrator | 2026-01-02 01:34:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:21.591994 | orchestrator | 2026-01-02 01:34:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:21.592116 | orchestrator | 2026-01-02 01:34:21 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:34:24.637343 | orchestrator | 2026-01-02 01:34:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:24.639192 | orchestrator | 2026-01-02 01:34:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:24.639290 | orchestrator | 2026-01-02 01:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:27.689030 | orchestrator | 2026-01-02 01:34:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:27.690376 | orchestrator | 2026-01-02 01:34:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:27.690750 | orchestrator | 2026-01-02 01:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:30.738658 | orchestrator | 2026-01-02 01:34:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:30.740259 | orchestrator | 2026-01-02 01:34:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:30.740292 | orchestrator | 2026-01-02 01:34:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:33.786894 | orchestrator | 2026-01-02 01:34:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:33.789569 | orchestrator | 2026-01-02 01:34:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:33.789600 | orchestrator | 2026-01-02 01:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:36.837689 | orchestrator | 2026-01-02 01:34:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:36.839200 | orchestrator | 2026-01-02 01:34:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:36.839265 | orchestrator | 2026-01-02 01:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:39.884773 | orchestrator | 
2026-01-02 01:34:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:39.885914 | orchestrator | 2026-01-02 01:34:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:39.886228 | orchestrator | 2026-01-02 01:34:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:42.934395 | orchestrator | 2026-01-02 01:34:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:42.935974 | orchestrator | 2026-01-02 01:34:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:42.936076 | orchestrator | 2026-01-02 01:34:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:45.982462 | orchestrator | 2026-01-02 01:34:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:45.984069 | orchestrator | 2026-01-02 01:34:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:45.984106 | orchestrator | 2026-01-02 01:34:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:49.034558 | orchestrator | 2026-01-02 01:34:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:49.035944 | orchestrator | 2026-01-02 01:34:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:49.035980 | orchestrator | 2026-01-02 01:34:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:52.079975 | orchestrator | 2026-01-02 01:34:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:52.080749 | orchestrator | 2026-01-02 01:34:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:52.081016 | orchestrator | 2026-01-02 01:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:55.130395 | orchestrator | 2026-01-02 01:34:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:34:55.131982 | orchestrator | 2026-01-02 01:34:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:55.132162 | orchestrator | 2026-01-02 01:34:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:34:58.181337 | orchestrator | 2026-01-02 01:34:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:34:58.182555 | orchestrator | 2026-01-02 01:34:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:34:58.182591 | orchestrator | 2026-01-02 01:34:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:01.236073 | orchestrator | 2026-01-02 01:35:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:01.238278 | orchestrator | 2026-01-02 01:35:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:01.238337 | orchestrator | 2026-01-02 01:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:04.280581 | orchestrator | 2026-01-02 01:35:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:04.282962 | orchestrator | 2026-01-02 01:35:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:04.283129 | orchestrator | 2026-01-02 01:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:07.330468 | orchestrator | 2026-01-02 01:35:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:07.331355 | orchestrator | 2026-01-02 01:35:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:07.331408 | orchestrator | 2026-01-02 01:35:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:10.379720 | orchestrator | 2026-01-02 01:35:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:10.380408 | orchestrator | 2026-01-02 01:35:10 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:10.380456 | orchestrator | 2026-01-02 01:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:13.424851 | orchestrator | 2026-01-02 01:35:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:13.425783 | orchestrator | 2026-01-02 01:35:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:13.425809 | orchestrator | 2026-01-02 01:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:16.469256 | orchestrator | 2026-01-02 01:35:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:16.471172 | orchestrator | 2026-01-02 01:35:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:16.471219 | orchestrator | 2026-01-02 01:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:19.520971 | orchestrator | 2026-01-02 01:35:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:19.522332 | orchestrator | 2026-01-02 01:35:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:19.522385 | orchestrator | 2026-01-02 01:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:22.572294 | orchestrator | 2026-01-02 01:35:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:22.574146 | orchestrator | 2026-01-02 01:35:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:22.574209 | orchestrator | 2026-01-02 01:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:25.618258 | orchestrator | 2026-01-02 01:35:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:25.619917 | orchestrator | 2026-01-02 01:35:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:35:25.619970 | orchestrator | 2026-01-02 01:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:28.673077 | orchestrator | 2026-01-02 01:35:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:28.769341 | orchestrator | 2026-01-02 01:35:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:28.769400 | orchestrator | 2026-01-02 01:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:31.721564 | orchestrator | 2026-01-02 01:35:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:31.723134 | orchestrator | 2026-01-02 01:35:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:31.723226 | orchestrator | 2026-01-02 01:35:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:34.769916 | orchestrator | 2026-01-02 01:35:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:34.770952 | orchestrator | 2026-01-02 01:35:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:34.770999 | orchestrator | 2026-01-02 01:35:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:37.817520 | orchestrator | 2026-01-02 01:35:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:37.820074 | orchestrator | 2026-01-02 01:35:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:37.820108 | orchestrator | 2026-01-02 01:35:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:40.870743 | orchestrator | 2026-01-02 01:35:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:40.871522 | orchestrator | 2026-01-02 01:35:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:40.871583 | orchestrator | 2026-01-02 01:35:40 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:35:43.919927 | orchestrator | 2026-01-02 01:35:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:43.923024 | orchestrator | 2026-01-02 01:35:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:43.923065 | orchestrator | 2026-01-02 01:35:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:46.969538 | orchestrator | 2026-01-02 01:35:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:46.970814 | orchestrator | 2026-01-02 01:35:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:46.970855 | orchestrator | 2026-01-02 01:35:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:50.023233 | orchestrator | 2026-01-02 01:35:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:50.024836 | orchestrator | 2026-01-02 01:35:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:50.024886 | orchestrator | 2026-01-02 01:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:53.076103 | orchestrator | 2026-01-02 01:35:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:53.077420 | orchestrator | 2026-01-02 01:35:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:53.077455 | orchestrator | 2026-01-02 01:35:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:56.128095 | orchestrator | 2026-01-02 01:35:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:56.129235 | orchestrator | 2026-01-02 01:35:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:56.129259 | orchestrator | 2026-01-02 01:35:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:35:59.179654 | orchestrator | 
2026-01-02 01:35:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:35:59.180522 | orchestrator | 2026-01-02 01:35:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:35:59.180566 | orchestrator | 2026-01-02 01:35:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:02.229166 | orchestrator | 2026-01-02 01:36:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:02.230557 | orchestrator | 2026-01-02 01:36:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:02.230652 | orchestrator | 2026-01-02 01:36:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:05.276040 | orchestrator | 2026-01-02 01:36:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:05.277540 | orchestrator | 2026-01-02 01:36:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:05.279213 | orchestrator | 2026-01-02 01:36:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:08.317741 | orchestrator | 2026-01-02 01:36:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:08.319418 | orchestrator | 2026-01-02 01:36:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:08.319642 | orchestrator | 2026-01-02 01:36:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:11.366259 | orchestrator | 2026-01-02 01:36:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:11.368556 | orchestrator | 2026-01-02 01:36:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:11.368710 | orchestrator | 2026-01-02 01:36:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:14.408869 | orchestrator | 2026-01-02 01:36:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:36:14.409650 | orchestrator | 2026-01-02 01:36:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:14.409682 | orchestrator | 2026-01-02 01:36:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:17.457728 | orchestrator | 2026-01-02 01:36:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:17.530277 | orchestrator | 2026-01-02 01:36:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:17.530332 | orchestrator | 2026-01-02 01:36:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:20.504350 | orchestrator | 2026-01-02 01:36:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:20.505069 | orchestrator | 2026-01-02 01:36:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:20.505107 | orchestrator | 2026-01-02 01:36:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:23.549764 | orchestrator | 2026-01-02 01:36:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:23.551190 | orchestrator | 2026-01-02 01:36:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:23.551225 | orchestrator | 2026-01-02 01:36:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:26.596447 | orchestrator | 2026-01-02 01:36:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:26.597914 | orchestrator | 2026-01-02 01:36:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:26.597964 | orchestrator | 2026-01-02 01:36:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:29.649855 | orchestrator | 2026-01-02 01:36:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:29.651442 | orchestrator | 2026-01-02 01:36:29 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:29.651665 | orchestrator | 2026-01-02 01:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:32.705700 | orchestrator | 2026-01-02 01:36:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:32.707360 | orchestrator | 2026-01-02 01:36:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:32.707397 | orchestrator | 2026-01-02 01:36:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:35.756092 | orchestrator | 2026-01-02 01:36:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:35.757985 | orchestrator | 2026-01-02 01:36:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:35.758131 | orchestrator | 2026-01-02 01:36:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:38.803361 | orchestrator | 2026-01-02 01:36:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:38.804801 | orchestrator | 2026-01-02 01:36:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:38.804963 | orchestrator | 2026-01-02 01:36:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:41.854629 | orchestrator | 2026-01-02 01:36:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:41.854740 | orchestrator | 2026-01-02 01:36:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:41.854762 | orchestrator | 2026-01-02 01:36:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:44.897553 | orchestrator | 2026-01-02 01:36:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:44.899307 | orchestrator | 2026-01-02 01:36:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:36:44.899377 | orchestrator | 2026-01-02 01:36:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:47.946750 | orchestrator | 2026-01-02 01:36:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:47.948154 | orchestrator | 2026-01-02 01:36:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:47.948190 | orchestrator | 2026-01-02 01:36:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:50.996894 | orchestrator | 2026-01-02 01:36:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:51.002333 | orchestrator | 2026-01-02 01:36:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:51.002431 | orchestrator | 2026-01-02 01:36:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:54.056557 | orchestrator | 2026-01-02 01:36:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:54.058289 | orchestrator | 2026-01-02 01:36:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:54.058340 | orchestrator | 2026-01-02 01:36:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:36:57.109476 | orchestrator | 2026-01-02 01:36:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:36:57.111571 | orchestrator | 2026-01-02 01:36:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:36:57.111758 | orchestrator | 2026-01-02 01:36:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:37:00.156402 | orchestrator | 2026-01-02 01:37:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:37:00.157814 | orchestrator | 2026-01-02 01:37:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:37:00.157856 | orchestrator | 2026-01-02 01:37:00 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:37:03.207368 | orchestrator | 2026-01-02 01:37:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:37:03.209373 | orchestrator | 2026-01-02 01:37:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:37:03.209569 | orchestrator | 2026-01-02 01:37:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:37:06.250446 | orchestrator | 2026-01-02 01:37:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:37:06.253440 | orchestrator | 2026-01-02 01:37:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:37:06.253482 | orchestrator | 2026-01-02 01:37:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:37:09.302838 | orchestrator | 2026-01-02 01:37:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:37:09.304246 | orchestrator | 2026-01-02 01:37:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:37:09.304474 | orchestrator | 2026-01-02 01:37:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:37:12.359186 | orchestrator | 2026-01-02 01:37:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:37:12.359292 | orchestrator | 2026-01-02 01:37:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:37:12.359308 | orchestrator | 2026-01-02 01:37:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:37:15.406421 | orchestrator | 2026-01-02 01:37:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:37:15.408467 | orchestrator | 2026-01-02 01:37:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:37:15.408552 | orchestrator | 2026-01-02 01:37:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:37:18.451309 | orchestrator | 
2026-01-02 01:42:02.128460 | orchestrator | 2026-01-02 01:42:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:05.173965 | orchestrator | 2026-01-02 01:42:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:05.175853 | orchestrator | 2026-01-02 01:42:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:05.175970 | orchestrator | 2026-01-02 01:42:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:08.227357 | orchestrator | 2026-01-02 01:42:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:08.229318 | orchestrator | 2026-01-02 01:42:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:08.229399 | orchestrator | 2026-01-02 01:42:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:11.275351 | orchestrator | 2026-01-02 01:42:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:11.276462 | orchestrator | 2026-01-02 01:42:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:11.276547 | orchestrator | 2026-01-02 01:42:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:14.320269 | orchestrator | 2026-01-02 01:42:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:14.321807 | orchestrator | 2026-01-02 01:42:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:14.321861 | orchestrator | 2026-01-02 01:42:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:17.361800 | orchestrator | 2026-01-02 01:42:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:17.362762 | orchestrator | 2026-01-02 01:42:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:17.362842 | orchestrator | 2026-01-02 01:42:17 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:42:20.408197 | orchestrator | 2026-01-02 01:42:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:20.409669 | orchestrator | 2026-01-02 01:42:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:20.409727 | orchestrator | 2026-01-02 01:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:23.452984 | orchestrator | 2026-01-02 01:42:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:23.460599 | orchestrator | 2026-01-02 01:42:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:23.460654 | orchestrator | 2026-01-02 01:42:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:26.498314 | orchestrator | 2026-01-02 01:42:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:26.500388 | orchestrator | 2026-01-02 01:42:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:26.500819 | orchestrator | 2026-01-02 01:42:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:29.544191 | orchestrator | 2026-01-02 01:42:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:29.545846 | orchestrator | 2026-01-02 01:42:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:29.546095 | orchestrator | 2026-01-02 01:42:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:32.595810 | orchestrator | 2026-01-02 01:42:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:32.598119 | orchestrator | 2026-01-02 01:42:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:32.598159 | orchestrator | 2026-01-02 01:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:35.648661 | orchestrator | 
2026-01-02 01:42:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:35.649834 | orchestrator | 2026-01-02 01:42:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:35.649879 | orchestrator | 2026-01-02 01:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:38.700620 | orchestrator | 2026-01-02 01:42:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:38.702381 | orchestrator | 2026-01-02 01:42:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:38.702470 | orchestrator | 2026-01-02 01:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:41.753205 | orchestrator | 2026-01-02 01:42:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:41.754145 | orchestrator | 2026-01-02 01:42:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:41.754389 | orchestrator | 2026-01-02 01:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:44.801183 | orchestrator | 2026-01-02 01:42:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:44.803830 | orchestrator | 2026-01-02 01:42:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:44.803880 | orchestrator | 2026-01-02 01:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:47.852837 | orchestrator | 2026-01-02 01:42:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:47.854359 | orchestrator | 2026-01-02 01:42:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:47.854390 | orchestrator | 2026-01-02 01:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:50.902571 | orchestrator | 2026-01-02 01:42:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:42:50.905052 | orchestrator | 2026-01-02 01:42:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:50.905118 | orchestrator | 2026-01-02 01:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:53.961452 | orchestrator | 2026-01-02 01:42:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:53.963269 | orchestrator | 2026-01-02 01:42:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:53.963316 | orchestrator | 2026-01-02 01:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:42:57.011444 | orchestrator | 2026-01-02 01:42:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:42:57.013146 | orchestrator | 2026-01-02 01:42:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:42:57.015009 | orchestrator | 2026-01-02 01:42:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:00.054928 | orchestrator | 2026-01-02 01:43:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:00.056887 | orchestrator | 2026-01-02 01:43:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:00.056923 | orchestrator | 2026-01-02 01:43:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:03.091504 | orchestrator | 2026-01-02 01:43:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:03.092679 | orchestrator | 2026-01-02 01:43:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:03.092748 | orchestrator | 2026-01-02 01:43:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:06.140502 | orchestrator | 2026-01-02 01:43:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:06.142925 | orchestrator | 2026-01-02 01:43:06 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:06.143151 | orchestrator | 2026-01-02 01:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:09.185782 | orchestrator | 2026-01-02 01:43:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:09.187927 | orchestrator | 2026-01-02 01:43:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:09.187974 | orchestrator | 2026-01-02 01:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:12.234461 | orchestrator | 2026-01-02 01:43:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:12.236788 | orchestrator | 2026-01-02 01:43:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:12.236862 | orchestrator | 2026-01-02 01:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:15.282111 | orchestrator | 2026-01-02 01:43:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:15.283062 | orchestrator | 2026-01-02 01:43:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:15.283093 | orchestrator | 2026-01-02 01:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:18.334122 | orchestrator | 2026-01-02 01:43:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:18.336165 | orchestrator | 2026-01-02 01:43:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:18.336235 | orchestrator | 2026-01-02 01:43:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:21.382486 | orchestrator | 2026-01-02 01:43:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:21.384210 | orchestrator | 2026-01-02 01:43:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:43:21.384263 | orchestrator | 2026-01-02 01:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:24.431066 | orchestrator | 2026-01-02 01:43:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:24.432069 | orchestrator | 2026-01-02 01:43:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:24.432097 | orchestrator | 2026-01-02 01:43:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:27.478139 | orchestrator | 2026-01-02 01:43:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:27.479653 | orchestrator | 2026-01-02 01:43:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:27.479842 | orchestrator | 2026-01-02 01:43:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:30.529163 | orchestrator | 2026-01-02 01:43:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:30.531359 | orchestrator | 2026-01-02 01:43:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:30.531454 | orchestrator | 2026-01-02 01:43:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:33.579200 | orchestrator | 2026-01-02 01:43:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:33.582232 | orchestrator | 2026-01-02 01:43:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:33.582440 | orchestrator | 2026-01-02 01:43:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:36.625634 | orchestrator | 2026-01-02 01:43:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:36.626057 | orchestrator | 2026-01-02 01:43:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:36.626262 | orchestrator | 2026-01-02 01:43:36 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:43:39.671206 | orchestrator | 2026-01-02 01:43:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:39.673465 | orchestrator | 2026-01-02 01:43:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:39.673643 | orchestrator | 2026-01-02 01:43:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:42.719433 | orchestrator | 2026-01-02 01:43:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:42.721333 | orchestrator | 2026-01-02 01:43:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:42.721407 | orchestrator | 2026-01-02 01:43:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:45.763342 | orchestrator | 2026-01-02 01:43:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:45.764685 | orchestrator | 2026-01-02 01:43:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:45.764805 | orchestrator | 2026-01-02 01:43:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:48.813998 | orchestrator | 2026-01-02 01:43:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:48.815467 | orchestrator | 2026-01-02 01:43:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:48.815521 | orchestrator | 2026-01-02 01:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:51.868524 | orchestrator | 2026-01-02 01:43:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:51.870253 | orchestrator | 2026-01-02 01:43:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:51.870500 | orchestrator | 2026-01-02 01:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:54.923239 | orchestrator | 
2026-01-02 01:43:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:54.925481 | orchestrator | 2026-01-02 01:43:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:54.925580 | orchestrator | 2026-01-02 01:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:43:57.974888 | orchestrator | 2026-01-02 01:43:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:43:57.977170 | orchestrator | 2026-01-02 01:43:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:43:57.977190 | orchestrator | 2026-01-02 01:43:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:01.029115 | orchestrator | 2026-01-02 01:44:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:01.030869 | orchestrator | 2026-01-02 01:44:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:01.030957 | orchestrator | 2026-01-02 01:44:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:04.080241 | orchestrator | 2026-01-02 01:44:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:04.082130 | orchestrator | 2026-01-02 01:44:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:04.082188 | orchestrator | 2026-01-02 01:44:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:07.135208 | orchestrator | 2026-01-02 01:44:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:07.137882 | orchestrator | 2026-01-02 01:44:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:07.137921 | orchestrator | 2026-01-02 01:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:10.188534 | orchestrator | 2026-01-02 01:44:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:44:10.190118 | orchestrator | 2026-01-02 01:44:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:10.190154 | orchestrator | 2026-01-02 01:44:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:13.232157 | orchestrator | 2026-01-02 01:44:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:13.234131 | orchestrator | 2026-01-02 01:44:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:13.234206 | orchestrator | 2026-01-02 01:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:16.275198 | orchestrator | 2026-01-02 01:44:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:16.277167 | orchestrator | 2026-01-02 01:44:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:16.277274 | orchestrator | 2026-01-02 01:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:19.331436 | orchestrator | 2026-01-02 01:44:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:19.332929 | orchestrator | 2026-01-02 01:44:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:19.333037 | orchestrator | 2026-01-02 01:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:22.379164 | orchestrator | 2026-01-02 01:44:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:22.381271 | orchestrator | 2026-01-02 01:44:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:22.381399 | orchestrator | 2026-01-02 01:44:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:25.425217 | orchestrator | 2026-01-02 01:44:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:25.426913 | orchestrator | 2026-01-02 01:44:25 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:25.427039 | orchestrator | 2026-01-02 01:44:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:28.475343 | orchestrator | 2026-01-02 01:44:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:28.476988 | orchestrator | 2026-01-02 01:44:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:28.477033 | orchestrator | 2026-01-02 01:44:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:31.519796 | orchestrator | 2026-01-02 01:44:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:31.522256 | orchestrator | 2026-01-02 01:44:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:31.522454 | orchestrator | 2026-01-02 01:44:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:34.570469 | orchestrator | 2026-01-02 01:44:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:34.571356 | orchestrator | 2026-01-02 01:44:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:34.571415 | orchestrator | 2026-01-02 01:44:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:37.624172 | orchestrator | 2026-01-02 01:44:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:37.625914 | orchestrator | 2026-01-02 01:44:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:37.625954 | orchestrator | 2026-01-02 01:44:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:40.673505 | orchestrator | 2026-01-02 01:44:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:40.676297 | orchestrator | 2026-01-02 01:44:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:44:40.676349 | orchestrator | 2026-01-02 01:44:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:43.726103 | orchestrator | 2026-01-02 01:44:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:43.726488 | orchestrator | 2026-01-02 01:44:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:43.726541 | orchestrator | 2026-01-02 01:44:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:46.771120 | orchestrator | 2026-01-02 01:44:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:46.772540 | orchestrator | 2026-01-02 01:44:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:46.772587 | orchestrator | 2026-01-02 01:44:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:49.820353 | orchestrator | 2026-01-02 01:44:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:49.821427 | orchestrator | 2026-01-02 01:44:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:49.821461 | orchestrator | 2026-01-02 01:44:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:52.869779 | orchestrator | 2026-01-02 01:44:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:52.872790 | orchestrator | 2026-01-02 01:44:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:52.872941 | orchestrator | 2026-01-02 01:44:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:44:55.917587 | orchestrator | 2026-01-02 01:44:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:55.919855 | orchestrator | 2026-01-02 01:44:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:55.919920 | orchestrator | 2026-01-02 01:44:55 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:44:58.972115 | orchestrator | 2026-01-02 01:44:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:44:58.975068 | orchestrator | 2026-01-02 01:44:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:44:58.975450 | orchestrator | 2026-01-02 01:44:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:02.029448 | orchestrator | 2026-01-02 01:45:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:02.029545 | orchestrator | 2026-01-02 01:45:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:02.029606 | orchestrator | 2026-01-02 01:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:05.071542 | orchestrator | 2026-01-02 01:45:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:05.073258 | orchestrator | 2026-01-02 01:45:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:05.073345 | orchestrator | 2026-01-02 01:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:08.119879 | orchestrator | 2026-01-02 01:45:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:08.121585 | orchestrator | 2026-01-02 01:45:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:08.121610 | orchestrator | 2026-01-02 01:45:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:11.170906 | orchestrator | 2026-01-02 01:45:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:11.173054 | orchestrator | 2026-01-02 01:45:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:11.173092 | orchestrator | 2026-01-02 01:45:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:14.220831 | orchestrator | 
2026-01-02 01:45:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:14.222451 | orchestrator | 2026-01-02 01:45:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:14.222487 | orchestrator | 2026-01-02 01:45:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:17.267322 | orchestrator | 2026-01-02 01:45:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:17.270238 | orchestrator | 2026-01-02 01:45:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:17.270430 | orchestrator | 2026-01-02 01:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:20.316391 | orchestrator | 2026-01-02 01:45:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:20.317532 | orchestrator | 2026-01-02 01:45:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:20.317564 | orchestrator | 2026-01-02 01:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:23.359974 | orchestrator | 2026-01-02 01:45:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:23.361935 | orchestrator | 2026-01-02 01:45:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:23.361969 | orchestrator | 2026-01-02 01:45:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:26.404113 | orchestrator | 2026-01-02 01:45:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:26.405628 | orchestrator | 2026-01-02 01:45:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:26.405683 | orchestrator | 2026-01-02 01:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:29.454076 | orchestrator | 2026-01-02 01:45:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:45:29.455470 | orchestrator | 2026-01-02 01:45:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:29.455527 | orchestrator | 2026-01-02 01:45:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:32.506375 | orchestrator | 2026-01-02 01:45:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:32.507960 | orchestrator | 2026-01-02 01:45:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:32.508080 | orchestrator | 2026-01-02 01:45:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:35.558775 | orchestrator | 2026-01-02 01:45:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:35.560320 | orchestrator | 2026-01-02 01:45:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:35.560365 | orchestrator | 2026-01-02 01:45:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:38.607975 | orchestrator | 2026-01-02 01:45:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:38.611990 | orchestrator | 2026-01-02 01:45:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:38.612127 | orchestrator | 2026-01-02 01:45:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:41.664632 | orchestrator | 2026-01-02 01:45:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:41.666531 | orchestrator | 2026-01-02 01:45:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:41.666573 | orchestrator | 2026-01-02 01:45:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:44.706194 | orchestrator | 2026-01-02 01:45:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:44.708779 | orchestrator | 2026-01-02 01:45:44 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:44.708818 | orchestrator | 2026-01-02 01:45:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:47.754636 | orchestrator | 2026-01-02 01:45:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:47.756083 | orchestrator | 2026-01-02 01:45:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:47.756289 | orchestrator | 2026-01-02 01:45:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:50.807959 | orchestrator | 2026-01-02 01:45:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:50.810241 | orchestrator | 2026-01-02 01:45:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:50.810292 | orchestrator | 2026-01-02 01:45:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:53.851707 | orchestrator | 2026-01-02 01:45:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:53.854359 | orchestrator | 2026-01-02 01:45:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:53.854452 | orchestrator | 2026-01-02 01:45:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:56.902120 | orchestrator | 2026-01-02 01:45:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:56.902301 | orchestrator | 2026-01-02 01:45:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:45:56.902341 | orchestrator | 2026-01-02 01:45:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:45:59.948158 | orchestrator | 2026-01-02 01:45:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:45:59.949833 | orchestrator | 2026-01-02 01:45:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:45:59.949865 | orchestrator | 2026-01-02 01:45:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:02.996187 | orchestrator | 2026-01-02 01:46:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:02.997936 | orchestrator | 2026-01-02 01:46:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:02.998401 | orchestrator | 2026-01-02 01:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:06.051472 | orchestrator | 2026-01-02 01:46:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:06.052533 | orchestrator | 2026-01-02 01:46:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:06.052566 | orchestrator | 2026-01-02 01:46:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:09.097480 | orchestrator | 2026-01-02 01:46:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:09.098959 | orchestrator | 2026-01-02 01:46:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:09.099004 | orchestrator | 2026-01-02 01:46:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:12.145382 | orchestrator | 2026-01-02 01:46:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:12.146355 | orchestrator | 2026-01-02 01:46:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:12.146486 | orchestrator | 2026-01-02 01:46:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:15.194177 | orchestrator | 2026-01-02 01:46:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:15.195972 | orchestrator | 2026-01-02 01:46:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:15.196337 | orchestrator | 2026-01-02 01:46:15 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:46:18.240261 | orchestrator | 2026-01-02 01:46:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:18.241887 | orchestrator | 2026-01-02 01:46:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:18.241930 | orchestrator | 2026-01-02 01:46:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:21.291188 | orchestrator | 2026-01-02 01:46:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:21.292353 | orchestrator | 2026-01-02 01:46:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:21.292391 | orchestrator | 2026-01-02 01:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:24.340794 | orchestrator | 2026-01-02 01:46:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:24.342753 | orchestrator | 2026-01-02 01:46:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:24.342836 | orchestrator | 2026-01-02 01:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:27.392309 | orchestrator | 2026-01-02 01:46:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:27.394188 | orchestrator | 2026-01-02 01:46:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:27.394283 | orchestrator | 2026-01-02 01:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:30.445258 | orchestrator | 2026-01-02 01:46:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:30.447206 | orchestrator | 2026-01-02 01:46:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:30.447268 | orchestrator | 2026-01-02 01:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:33.488840 | orchestrator | 
2026-01-02 01:46:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:33.490594 | orchestrator | 2026-01-02 01:46:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:33.490699 | orchestrator | 2026-01-02 01:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:36.531082 | orchestrator | 2026-01-02 01:46:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:36.532603 | orchestrator | 2026-01-02 01:46:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:36.532636 | orchestrator | 2026-01-02 01:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:39.577624 | orchestrator | 2026-01-02 01:46:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:39.579972 | orchestrator | 2026-01-02 01:46:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:39.580024 | orchestrator | 2026-01-02 01:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:42.630774 | orchestrator | 2026-01-02 01:46:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:42.633036 | orchestrator | 2026-01-02 01:46:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:42.633403 | orchestrator | 2026-01-02 01:46:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:45.680696 | orchestrator | 2026-01-02 01:46:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:45.681674 | orchestrator | 2026-01-02 01:46:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:45.681715 | orchestrator | 2026-01-02 01:46:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:48.732834 | orchestrator | 2026-01-02 01:46:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:46:48.734462 | orchestrator | 2026-01-02 01:46:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:48.734510 | orchestrator | 2026-01-02 01:46:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:51.785104 | orchestrator | 2026-01-02 01:46:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:51.786798 | orchestrator | 2026-01-02 01:46:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:51.786841 | orchestrator | 2026-01-02 01:46:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:54.836392 | orchestrator | 2026-01-02 01:46:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:54.837803 | orchestrator | 2026-01-02 01:46:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:54.837840 | orchestrator | 2026-01-02 01:46:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:46:57.887102 | orchestrator | 2026-01-02 01:46:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:46:57.888396 | orchestrator | 2026-01-02 01:46:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:46:57.889018 | orchestrator | 2026-01-02 01:46:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:00.935683 | orchestrator | 2026-01-02 01:47:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:00.937016 | orchestrator | 2026-01-02 01:47:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:00.937557 | orchestrator | 2026-01-02 01:47:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:03.984901 | orchestrator | 2026-01-02 01:47:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:03.986956 | orchestrator | 2026-01-02 01:47:03 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:03.987006 | orchestrator | 2026-01-02 01:47:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:07.035081 | orchestrator | 2026-01-02 01:47:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:07.036831 | orchestrator | 2026-01-02 01:47:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:07.036960 | orchestrator | 2026-01-02 01:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:10.088147 | orchestrator | 2026-01-02 01:47:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:10.089949 | orchestrator | 2026-01-02 01:47:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:10.090000 | orchestrator | 2026-01-02 01:47:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:13.134564 | orchestrator | 2026-01-02 01:47:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:13.135787 | orchestrator | 2026-01-02 01:47:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:13.135920 | orchestrator | 2026-01-02 01:47:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:16.177030 | orchestrator | 2026-01-02 01:47:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:16.177383 | orchestrator | 2026-01-02 01:47:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:16.177415 | orchestrator | 2026-01-02 01:47:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:19.224649 | orchestrator | 2026-01-02 01:47:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:19.226343 | orchestrator | 2026-01-02 01:47:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:47:19.226387 | orchestrator | 2026-01-02 01:47:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:22.267444 | orchestrator | 2026-01-02 01:47:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:22.268950 | orchestrator | 2026-01-02 01:47:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:22.268982 | orchestrator | 2026-01-02 01:47:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:25.316377 | orchestrator | 2026-01-02 01:47:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:25.318353 | orchestrator | 2026-01-02 01:47:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:25.318374 | orchestrator | 2026-01-02 01:47:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:28.366424 | orchestrator | 2026-01-02 01:47:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:28.367841 | orchestrator | 2026-01-02 01:47:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:28.367938 | orchestrator | 2026-01-02 01:47:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:31.415648 | orchestrator | 2026-01-02 01:47:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:31.417052 | orchestrator | 2026-01-02 01:47:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:31.417191 | orchestrator | 2026-01-02 01:47:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:34.462876 | orchestrator | 2026-01-02 01:47:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:34.464406 | orchestrator | 2026-01-02 01:47:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:34.464449 | orchestrator | 2026-01-02 01:47:34 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:47:37.512525 | orchestrator | 2026-01-02 01:47:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:37.513981 | orchestrator | 2026-01-02 01:47:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:37.514222 | orchestrator | 2026-01-02 01:47:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:40.558218 | orchestrator | 2026-01-02 01:47:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:40.559802 | orchestrator | 2026-01-02 01:47:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:40.559840 | orchestrator | 2026-01-02 01:47:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:43.603183 | orchestrator | 2026-01-02 01:47:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:43.603613 | orchestrator | 2026-01-02 01:47:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:43.603645 | orchestrator | 2026-01-02 01:47:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:46.653934 | orchestrator | 2026-01-02 01:47:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:46.654902 | orchestrator | 2026-01-02 01:47:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:46.655047 | orchestrator | 2026-01-02 01:47:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:49.699178 | orchestrator | 2026-01-02 01:47:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:49.701049 | orchestrator | 2026-01-02 01:47:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:49.701078 | orchestrator | 2026-01-02 01:47:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:52.745048 | orchestrator | 
2026-01-02 01:47:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:52.746366 | orchestrator | 2026-01-02 01:47:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:52.746400 | orchestrator | 2026-01-02 01:47:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:55.793445 | orchestrator | 2026-01-02 01:47:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:55.795265 | orchestrator | 2026-01-02 01:47:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:55.795567 | orchestrator | 2026-01-02 01:47:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:47:58.846007 | orchestrator | 2026-01-02 01:47:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:47:58.847862 | orchestrator | 2026-01-02 01:47:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:47:58.847942 | orchestrator | 2026-01-02 01:47:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:01.894410 | orchestrator | 2026-01-02 01:48:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:01.896171 | orchestrator | 2026-01-02 01:48:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:01.896485 | orchestrator | 2026-01-02 01:48:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:04.939255 | orchestrator | 2026-01-02 01:48:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:04.941018 | orchestrator | 2026-01-02 01:48:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:04.941058 | orchestrator | 2026-01-02 01:48:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:07.986615 | orchestrator | 2026-01-02 01:48:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:48:07.988182 | orchestrator | 2026-01-02 01:48:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:07.988229 | orchestrator | 2026-01-02 01:48:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:11.033039 | orchestrator | 2026-01-02 01:48:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:11.036534 | orchestrator | 2026-01-02 01:48:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:11.036583 | orchestrator | 2026-01-02 01:48:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:14.085026 | orchestrator | 2026-01-02 01:48:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:14.086914 | orchestrator | 2026-01-02 01:48:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:14.086980 | orchestrator | 2026-01-02 01:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:17.130182 | orchestrator | 2026-01-02 01:48:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:17.133236 | orchestrator | 2026-01-02 01:48:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:17.133288 | orchestrator | 2026-01-02 01:48:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:20.179864 | orchestrator | 2026-01-02 01:48:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:20.181664 | orchestrator | 2026-01-02 01:48:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:20.181699 | orchestrator | 2026-01-02 01:48:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:23.228526 | orchestrator | 2026-01-02 01:48:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:23.229677 | orchestrator | 2026-01-02 01:48:23 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:23.229716 | orchestrator | 2026-01-02 01:48:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:26.271662 | orchestrator | 2026-01-02 01:48:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:26.273138 | orchestrator | 2026-01-02 01:48:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:26.273272 | orchestrator | 2026-01-02 01:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:29.316487 | orchestrator | 2026-01-02 01:48:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:29.317011 | orchestrator | 2026-01-02 01:48:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:29.317078 | orchestrator | 2026-01-02 01:48:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:32.366973 | orchestrator | 2026-01-02 01:48:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:32.369273 | orchestrator | 2026-01-02 01:48:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:32.369630 | orchestrator | 2026-01-02 01:48:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:35.419033 | orchestrator | 2026-01-02 01:48:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:35.421080 | orchestrator | 2026-01-02 01:48:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:35.421116 | orchestrator | 2026-01-02 01:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:38.469599 | orchestrator | 2026-01-02 01:48:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:38.470460 | orchestrator | 2026-01-02 01:48:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:48:38.470493 | orchestrator | 2026-01-02 01:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:41.513370 | orchestrator | 2026-01-02 01:48:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:41.515354 | orchestrator | 2026-01-02 01:48:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:41.515511 | orchestrator | 2026-01-02 01:48:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:44.558522 | orchestrator | 2026-01-02 01:48:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:44.559301 | orchestrator | 2026-01-02 01:48:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:44.559330 | orchestrator | 2026-01-02 01:48:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:47.594910 | orchestrator | 2026-01-02 01:48:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:47.596347 | orchestrator | 2026-01-02 01:48:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:47.596464 | orchestrator | 2026-01-02 01:48:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:50.644059 | orchestrator | 2026-01-02 01:48:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:50.645262 | orchestrator | 2026-01-02 01:48:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:50.645296 | orchestrator | 2026-01-02 01:48:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:53.695496 | orchestrator | 2026-01-02 01:48:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:53.696649 | orchestrator | 2026-01-02 01:48:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:53.696742 | orchestrator | 2026-01-02 01:48:53 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:48:56.738924 | orchestrator | 2026-01-02 01:48:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:56.740278 | orchestrator | 2026-01-02 01:48:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:56.740314 | orchestrator | 2026-01-02 01:48:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:48:59.783044 | orchestrator | 2026-01-02 01:48:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:48:59.784661 | orchestrator | 2026-01-02 01:48:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:48:59.784757 | orchestrator | 2026-01-02 01:48:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:02.828674 | orchestrator | 2026-01-02 01:49:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:02.831006 | orchestrator | 2026-01-02 01:49:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:02.831061 | orchestrator | 2026-01-02 01:49:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:05.870345 | orchestrator | 2026-01-02 01:49:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:05.872211 | orchestrator | 2026-01-02 01:49:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:05.872244 | orchestrator | 2026-01-02 01:49:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:08.918650 | orchestrator | 2026-01-02 01:49:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:08.920196 | orchestrator | 2026-01-02 01:49:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:08.920346 | orchestrator | 2026-01-02 01:49:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:11.971360 | orchestrator | 
2026-01-02 01:49:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:11.975102 | orchestrator | 2026-01-02 01:49:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:11.975715 | orchestrator | 2026-01-02 01:49:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:15.025246 | orchestrator | 2026-01-02 01:49:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:15.026771 | orchestrator | 2026-01-02 01:49:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:15.026845 | orchestrator | 2026-01-02 01:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:18.070001 | orchestrator | 2026-01-02 01:49:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:18.071959 | orchestrator | 2026-01-02 01:49:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:18.072090 | orchestrator | 2026-01-02 01:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:21.116874 | orchestrator | 2026-01-02 01:49:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:21.118093 | orchestrator | 2026-01-02 01:49:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:21.118183 | orchestrator | 2026-01-02 01:49:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:24.167192 | orchestrator | 2026-01-02 01:49:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:24.169334 | orchestrator | 2026-01-02 01:49:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:24.169586 | orchestrator | 2026-01-02 01:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:27.218353 | orchestrator | 2026-01-02 01:49:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:49:27.220687 | orchestrator | 2026-01-02 01:49:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:27.220742 | orchestrator | 2026-01-02 01:49:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:30.266313 | orchestrator | 2026-01-02 01:49:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:30.268298 | orchestrator | 2026-01-02 01:49:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:30.268410 | orchestrator | 2026-01-02 01:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:33.316857 | orchestrator | 2026-01-02 01:49:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:33.318264 | orchestrator | 2026-01-02 01:49:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:33.318312 | orchestrator | 2026-01-02 01:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:36.363430 | orchestrator | 2026-01-02 01:49:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:36.365257 | orchestrator | 2026-01-02 01:49:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:36.365292 | orchestrator | 2026-01-02 01:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:39.409589 | orchestrator | 2026-01-02 01:49:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:39.411339 | orchestrator | 2026-01-02 01:49:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:39.411405 | orchestrator | 2026-01-02 01:49:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:42.459048 | orchestrator | 2026-01-02 01:49:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:42.460617 | orchestrator | 2026-01-02 01:49:42 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:42.460651 | orchestrator | 2026-01-02 01:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:45.504887 | orchestrator | 2026-01-02 01:49:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:45.505511 | orchestrator | 2026-01-02 01:49:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:45.505556 | orchestrator | 2026-01-02 01:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:48.551248 | orchestrator | 2026-01-02 01:49:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:48.552171 | orchestrator | 2026-01-02 01:49:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:48.552268 | orchestrator | 2026-01-02 01:49:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:51.598095 | orchestrator | 2026-01-02 01:49:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:51.599737 | orchestrator | 2026-01-02 01:49:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:51.600075 | orchestrator | 2026-01-02 01:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:54.644525 | orchestrator | 2026-01-02 01:49:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:54.645971 | orchestrator | 2026-01-02 01:49:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:49:54.646001 | orchestrator | 2026-01-02 01:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:49:57.692104 | orchestrator | 2026-01-02 01:49:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:49:57.693329 | orchestrator | 2026-01-02 01:49:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:49:57.693409 | orchestrator | 2026-01-02 01:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:00.732518 | orchestrator | 2026-01-02 01:50:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:00.734206 | orchestrator | 2026-01-02 01:50:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:00.734232 | orchestrator | 2026-01-02 01:50:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:03.778445 | orchestrator | 2026-01-02 01:50:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:03.781049 | orchestrator | 2026-01-02 01:50:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:03.781096 | orchestrator | 2026-01-02 01:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:06.828905 | orchestrator | 2026-01-02 01:50:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:06.830603 | orchestrator | 2026-01-02 01:50:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:06.830639 | orchestrator | 2026-01-02 01:50:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:09.876028 | orchestrator | 2026-01-02 01:50:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:09.877939 | orchestrator | 2026-01-02 01:50:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:09.878191 | orchestrator | 2026-01-02 01:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:12.932271 | orchestrator | 2026-01-02 01:50:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:12.933880 | orchestrator | 2026-01-02 01:50:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:12.933916 | orchestrator | 2026-01-02 01:50:12 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:50:15.980164 | orchestrator | 2026-01-02 01:50:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:15.981448 | orchestrator | 2026-01-02 01:50:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:15.981773 | orchestrator | 2026-01-02 01:50:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:19.033788 | orchestrator | 2026-01-02 01:50:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:19.036175 | orchestrator | 2026-01-02 01:50:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:19.036244 | orchestrator | 2026-01-02 01:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:22.075308 | orchestrator | 2026-01-02 01:50:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:22.076260 | orchestrator | 2026-01-02 01:50:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:22.076339 | orchestrator | 2026-01-02 01:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:25.122719 | orchestrator | 2026-01-02 01:50:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:25.124445 | orchestrator | 2026-01-02 01:50:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:25.124473 | orchestrator | 2026-01-02 01:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:28.174441 | orchestrator | 2026-01-02 01:50:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:28.176943 | orchestrator | 2026-01-02 01:50:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:28.176978 | orchestrator | 2026-01-02 01:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:31.224127 | orchestrator | 
2026-01-02 01:50:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:31.225469 | orchestrator | 2026-01-02 01:50:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:31.225499 | orchestrator | 2026-01-02 01:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:34.272486 | orchestrator | 2026-01-02 01:50:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:34.274804 | orchestrator | 2026-01-02 01:50:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:34.274866 | orchestrator | 2026-01-02 01:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:37.326581 | orchestrator | 2026-01-02 01:50:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:37.327998 | orchestrator | 2026-01-02 01:50:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:37.328033 | orchestrator | 2026-01-02 01:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:40.372945 | orchestrator | 2026-01-02 01:50:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:40.374364 | orchestrator | 2026-01-02 01:50:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:40.374503 | orchestrator | 2026-01-02 01:50:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:43.424710 | orchestrator | 2026-01-02 01:50:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:50:43.426121 | orchestrator | 2026-01-02 01:50:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:50:43.426169 | orchestrator | 2026-01-02 01:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:50:46.468613 | orchestrator | 2026-01-02 01:50:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED
2026-01-02 01:50:46.469995 | orchestrator | 2026-01-02 01:50:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:50:46.470090 | orchestrator | 2026-01-02 01:50:46 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:50:49.513736 | orchestrator | 2026-01-02 01:50:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:50:49.515419 | orchestrator | 2026-01-02 01:50:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:50:49.515659 | orchestrator | 2026-01-02 01:50:49 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:56:00.486366 | orchestrator | 2026-01-02 01:56:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:56:00.488078 | orchestrator | 2026-01-02 01:56:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:56:00.488116 | orchestrator | 2026-01-02 01:56:00 | INFO  | Wait 1 second(s) until the next check
2026-01-02 01:56:03.530204 | orchestrator | 2026-01-02 01:56:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in
state STARTED 2026-01-02 01:56:03.531420 | orchestrator | 2026-01-02 01:56:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:03.531490 | orchestrator | 2026-01-02 01:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:06.572988 | orchestrator | 2026-01-02 01:56:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:06.574480 | orchestrator | 2026-01-02 01:56:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:06.574600 | orchestrator | 2026-01-02 01:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:09.622615 | orchestrator | 2026-01-02 01:56:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:09.625742 | orchestrator | 2026-01-02 01:56:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:09.625785 | orchestrator | 2026-01-02 01:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:12.669981 | orchestrator | 2026-01-02 01:56:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:12.672150 | orchestrator | 2026-01-02 01:56:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:12.672195 | orchestrator | 2026-01-02 01:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:15.721414 | orchestrator | 2026-01-02 01:56:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:15.724787 | orchestrator | 2026-01-02 01:56:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:15.724824 | orchestrator | 2026-01-02 01:56:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:18.768403 | orchestrator | 2026-01-02 01:56:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:18.769782 | orchestrator | 2026-01-02 01:56:18 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:18.769814 | orchestrator | 2026-01-02 01:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:21.817728 | orchestrator | 2026-01-02 01:56:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:21.818454 | orchestrator | 2026-01-02 01:56:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:21.818480 | orchestrator | 2026-01-02 01:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:24.863706 | orchestrator | 2026-01-02 01:56:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:24.865240 | orchestrator | 2026-01-02 01:56:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:24.865295 | orchestrator | 2026-01-02 01:56:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:27.911096 | orchestrator | 2026-01-02 01:56:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:27.912339 | orchestrator | 2026-01-02 01:56:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:27.912420 | orchestrator | 2026-01-02 01:56:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:30.963157 | orchestrator | 2026-01-02 01:56:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:30.965080 | orchestrator | 2026-01-02 01:56:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:30.965094 | orchestrator | 2026-01-02 01:56:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:34.011533 | orchestrator | 2026-01-02 01:56:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:34.013828 | orchestrator | 2026-01-02 01:56:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:56:34.014107 | orchestrator | 2026-01-02 01:56:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:37.058757 | orchestrator | 2026-01-02 01:56:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:37.058863 | orchestrator | 2026-01-02 01:56:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:37.059049 | orchestrator | 2026-01-02 01:56:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:40.104147 | orchestrator | 2026-01-02 01:56:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:40.105622 | orchestrator | 2026-01-02 01:56:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:40.105701 | orchestrator | 2026-01-02 01:56:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:43.151166 | orchestrator | 2026-01-02 01:56:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:43.152675 | orchestrator | 2026-01-02 01:56:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:43.152716 | orchestrator | 2026-01-02 01:56:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:46.195847 | orchestrator | 2026-01-02 01:56:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:46.198160 | orchestrator | 2026-01-02 01:56:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:46.198303 | orchestrator | 2026-01-02 01:56:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:49.244827 | orchestrator | 2026-01-02 01:56:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:49.246578 | orchestrator | 2026-01-02 01:56:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:49.246683 | orchestrator | 2026-01-02 01:56:49 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:56:52.290366 | orchestrator | 2026-01-02 01:56:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:52.292013 | orchestrator | 2026-01-02 01:56:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:52.292440 | orchestrator | 2026-01-02 01:56:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:55.340379 | orchestrator | 2026-01-02 01:56:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:55.340674 | orchestrator | 2026-01-02 01:56:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:55.340703 | orchestrator | 2026-01-02 01:56:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:56:58.381362 | orchestrator | 2026-01-02 01:56:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:56:58.382904 | orchestrator | 2026-01-02 01:56:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:56:58.383044 | orchestrator | 2026-01-02 01:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:01.421424 | orchestrator | 2026-01-02 01:57:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:01.423729 | orchestrator | 2026-01-02 01:57:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:01.423766 | orchestrator | 2026-01-02 01:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:04.465433 | orchestrator | 2026-01-02 01:57:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:04.468239 | orchestrator | 2026-01-02 01:57:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:04.468322 | orchestrator | 2026-01-02 01:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:07.513890 | orchestrator | 
2026-01-02 01:57:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:07.514261 | orchestrator | 2026-01-02 01:57:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:07.514344 | orchestrator | 2026-01-02 01:57:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:10.563212 | orchestrator | 2026-01-02 01:57:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:10.565030 | orchestrator | 2026-01-02 01:57:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:10.565167 | orchestrator | 2026-01-02 01:57:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:13.615022 | orchestrator | 2026-01-02 01:57:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:13.617408 | orchestrator | 2026-01-02 01:57:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:13.617486 | orchestrator | 2026-01-02 01:57:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:16.658193 | orchestrator | 2026-01-02 01:57:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:16.658984 | orchestrator | 2026-01-02 01:57:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:16.659007 | orchestrator | 2026-01-02 01:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:19.707364 | orchestrator | 2026-01-02 01:57:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:19.708861 | orchestrator | 2026-01-02 01:57:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:19.708976 | orchestrator | 2026-01-02 01:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:22.759339 | orchestrator | 2026-01-02 01:57:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:57:22.760485 | orchestrator | 2026-01-02 01:57:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:22.760521 | orchestrator | 2026-01-02 01:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:25.804527 | orchestrator | 2026-01-02 01:57:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:25.806501 | orchestrator | 2026-01-02 01:57:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:25.807198 | orchestrator | 2026-01-02 01:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:28.853833 | orchestrator | 2026-01-02 01:57:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:28.855234 | orchestrator | 2026-01-02 01:57:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:28.855289 | orchestrator | 2026-01-02 01:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:31.903758 | orchestrator | 2026-01-02 01:57:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:31.904738 | orchestrator | 2026-01-02 01:57:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:31.904769 | orchestrator | 2026-01-02 01:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:34.952914 | orchestrator | 2026-01-02 01:57:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:34.954957 | orchestrator | 2026-01-02 01:57:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:34.955080 | orchestrator | 2026-01-02 01:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:38.003449 | orchestrator | 2026-01-02 01:57:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:38.005763 | orchestrator | 2026-01-02 01:57:38 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:38.005819 | orchestrator | 2026-01-02 01:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:41.051773 | orchestrator | 2026-01-02 01:57:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:41.053118 | orchestrator | 2026-01-02 01:57:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:41.053249 | orchestrator | 2026-01-02 01:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:44.098206 | orchestrator | 2026-01-02 01:57:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:44.100297 | orchestrator | 2026-01-02 01:57:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:44.100337 | orchestrator | 2026-01-02 01:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:47.143858 | orchestrator | 2026-01-02 01:57:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:47.147008 | orchestrator | 2026-01-02 01:57:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:47.147047 | orchestrator | 2026-01-02 01:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:50.195962 | orchestrator | 2026-01-02 01:57:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:50.198404 | orchestrator | 2026-01-02 01:57:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:50.198552 | orchestrator | 2026-01-02 01:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:53.238767 | orchestrator | 2026-01-02 01:57:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:53.240570 | orchestrator | 2026-01-02 01:57:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:57:53.240608 | orchestrator | 2026-01-02 01:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:56.286360 | orchestrator | 2026-01-02 01:57:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:56.287548 | orchestrator | 2026-01-02 01:57:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:56.287644 | orchestrator | 2026-01-02 01:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:57:59.333305 | orchestrator | 2026-01-02 01:57:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:57:59.333494 | orchestrator | 2026-01-02 01:57:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:57:59.333517 | orchestrator | 2026-01-02 01:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:02.374634 | orchestrator | 2026-01-02 01:58:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:02.375430 | orchestrator | 2026-01-02 01:58:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:02.375461 | orchestrator | 2026-01-02 01:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:05.420528 | orchestrator | 2026-01-02 01:58:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:05.421544 | orchestrator | 2026-01-02 01:58:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:05.421579 | orchestrator | 2026-01-02 01:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:08.466005 | orchestrator | 2026-01-02 01:58:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:08.467842 | orchestrator | 2026-01-02 01:58:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:08.467977 | orchestrator | 2026-01-02 01:58:08 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 01:58:11.509217 | orchestrator | 2026-01-02 01:58:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:11.510552 | orchestrator | 2026-01-02 01:58:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:11.510675 | orchestrator | 2026-01-02 01:58:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:14.551272 | orchestrator | 2026-01-02 01:58:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:14.553476 | orchestrator | 2026-01-02 01:58:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:14.553634 | orchestrator | 2026-01-02 01:58:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:17.598243 | orchestrator | 2026-01-02 01:58:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:17.600315 | orchestrator | 2026-01-02 01:58:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:17.600343 | orchestrator | 2026-01-02 01:58:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:20.646233 | orchestrator | 2026-01-02 01:58:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:20.647615 | orchestrator | 2026-01-02 01:58:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:20.647644 | orchestrator | 2026-01-02 01:58:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:23.695242 | orchestrator | 2026-01-02 01:58:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:23.697755 | orchestrator | 2026-01-02 01:58:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:23.698266 | orchestrator | 2026-01-02 01:58:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:26.754352 | orchestrator | 
2026-01-02 01:58:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:26.756804 | orchestrator | 2026-01-02 01:58:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:26.756846 | orchestrator | 2026-01-02 01:58:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:29.804426 | orchestrator | 2026-01-02 01:58:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:29.807289 | orchestrator | 2026-01-02 01:58:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:29.807338 | orchestrator | 2026-01-02 01:58:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:32.856031 | orchestrator | 2026-01-02 01:58:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:32.856899 | orchestrator | 2026-01-02 01:58:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:32.857111 | orchestrator | 2026-01-02 01:58:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:35.902932 | orchestrator | 2026-01-02 01:58:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:35.905355 | orchestrator | 2026-01-02 01:58:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:35.905467 | orchestrator | 2026-01-02 01:58:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:38.953602 | orchestrator | 2026-01-02 01:58:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:38.957539 | orchestrator | 2026-01-02 01:58:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:38.958225 | orchestrator | 2026-01-02 01:58:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:42.019703 | orchestrator | 2026-01-02 01:58:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 01:58:42.022847 | orchestrator | 2026-01-02 01:58:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:42.022901 | orchestrator | 2026-01-02 01:58:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:45.071705 | orchestrator | 2026-01-02 01:58:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:45.073322 | orchestrator | 2026-01-02 01:58:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:45.073492 | orchestrator | 2026-01-02 01:58:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:48.121933 | orchestrator | 2026-01-02 01:58:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:48.123406 | orchestrator | 2026-01-02 01:58:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:48.123423 | orchestrator | 2026-01-02 01:58:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:51.169553 | orchestrator | 2026-01-02 01:58:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:51.172040 | orchestrator | 2026-01-02 01:58:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:51.172071 | orchestrator | 2026-01-02 01:58:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:54.210465 | orchestrator | 2026-01-02 01:58:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:54.211764 | orchestrator | 2026-01-02 01:58:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:54.211806 | orchestrator | 2026-01-02 01:58:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:58:57.254333 | orchestrator | 2026-01-02 01:58:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:58:57.255181 | orchestrator | 2026-01-02 01:58:57 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:58:57.255342 | orchestrator | 2026-01-02 01:58:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:00.298859 | orchestrator | 2026-01-02 01:59:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:00.300562 | orchestrator | 2026-01-02 01:59:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:00.300730 | orchestrator | 2026-01-02 01:59:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:03.351668 | orchestrator | 2026-01-02 01:59:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:03.352713 | orchestrator | 2026-01-02 01:59:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:03.352754 | orchestrator | 2026-01-02 01:59:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:06.396330 | orchestrator | 2026-01-02 01:59:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:06.397814 | orchestrator | 2026-01-02 01:59:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:06.397918 | orchestrator | 2026-01-02 01:59:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:09.444349 | orchestrator | 2026-01-02 01:59:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:09.445320 | orchestrator | 2026-01-02 01:59:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:09.445520 | orchestrator | 2026-01-02 01:59:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:12.486552 | orchestrator | 2026-01-02 01:59:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:12.489300 | orchestrator | 2026-01-02 01:59:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 01:59:12.489337 | orchestrator | 2026-01-02 01:59:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:15.536390 | orchestrator | 2026-01-02 01:59:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:15.537204 | orchestrator | 2026-01-02 01:59:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:15.537242 | orchestrator | 2026-01-02 01:59:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:18.587228 | orchestrator | 2026-01-02 01:59:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:18.588819 | orchestrator | 2026-01-02 01:59:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:18.588923 | orchestrator | 2026-01-02 01:59:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:21.639321 | orchestrator | 2026-01-02 01:59:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:21.641094 | orchestrator | 2026-01-02 01:59:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:21.641126 | orchestrator | 2026-01-02 01:59:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:24.688214 | orchestrator | 2026-01-02 01:59:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:24.689768 | orchestrator | 2026-01-02 01:59:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:24.689852 | orchestrator | 2026-01-02 01:59:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 01:59:27.731471 | orchestrator | 2026-01-02 01:59:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 01:59:27.733173 | orchestrator | 2026-01-02 01:59:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 01:59:27.733215 | orchestrator | 2026-01-02 01:59:27 | INFO  | Wait 
1 second(s) until the next check
2026-01-02 01:59:30.776480 | orchestrator | 2026-01-02 01:59:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 01:59:30.777835 | orchestrator | 2026-01-02 01:59:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 01:59:30.777870 | orchestrator | 2026-01-02 01:59:30 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated roughly every 3 seconds from 01:59:33 through 02:04:41; both tasks remained in state STARTED throughout ...]
2026-01-02 02:04:44.706405 | orchestrator | 2026-01-02 02:04:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:04:44.709101 | orchestrator | 2026-01-02 02:04:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:04:44.709667 | orchestrator | 2026-01-02 02:04:44 | INFO  | Wait
1 second(s) until the next check 2026-01-02 02:04:47.758263 | orchestrator | 2026-01-02 02:04:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:04:47.760249 | orchestrator | 2026-01-02 02:04:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:04:47.760304 | orchestrator | 2026-01-02 02:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:04:50.811238 | orchestrator | 2026-01-02 02:04:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:04:50.812456 | orchestrator | 2026-01-02 02:04:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:04:50.812903 | orchestrator | 2026-01-02 02:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:04:53.856766 | orchestrator | 2026-01-02 02:04:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:04:53.859067 | orchestrator | 2026-01-02 02:04:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:04:53.859124 | orchestrator | 2026-01-02 02:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:04:56.906398 | orchestrator | 2026-01-02 02:04:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:04:56.908437 | orchestrator | 2026-01-02 02:04:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:04:56.908652 | orchestrator | 2026-01-02 02:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:04:59.956521 | orchestrator | 2026-01-02 02:04:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:04:59.959165 | orchestrator | 2026-01-02 02:04:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:04:59.959308 | orchestrator | 2026-01-02 02:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:03.008336 | orchestrator | 
2026-01-02 02:05:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:03.009357 | orchestrator | 2026-01-02 02:05:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:03.009596 | orchestrator | 2026-01-02 02:05:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:06.059290 | orchestrator | 2026-01-02 02:05:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:06.061852 | orchestrator | 2026-01-02 02:05:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:06.061910 | orchestrator | 2026-01-02 02:05:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:09.100330 | orchestrator | 2026-01-02 02:05:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:09.102734 | orchestrator | 2026-01-02 02:05:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:09.102769 | orchestrator | 2026-01-02 02:05:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:12.147247 | orchestrator | 2026-01-02 02:05:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:12.148775 | orchestrator | 2026-01-02 02:05:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:12.149092 | orchestrator | 2026-01-02 02:05:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:15.199012 | orchestrator | 2026-01-02 02:05:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:15.200155 | orchestrator | 2026-01-02 02:05:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:15.200418 | orchestrator | 2026-01-02 02:05:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:18.244288 | orchestrator | 2026-01-02 02:05:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:05:18.245763 | orchestrator | 2026-01-02 02:05:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:18.245860 | orchestrator | 2026-01-02 02:05:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:21.287403 | orchestrator | 2026-01-02 02:05:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:21.287762 | orchestrator | 2026-01-02 02:05:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:21.287795 | orchestrator | 2026-01-02 02:05:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:24.332629 | orchestrator | 2026-01-02 02:05:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:24.333810 | orchestrator | 2026-01-02 02:05:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:24.333841 | orchestrator | 2026-01-02 02:05:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:27.372534 | orchestrator | 2026-01-02 02:05:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:27.374605 | orchestrator | 2026-01-02 02:05:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:27.374645 | orchestrator | 2026-01-02 02:05:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:30.419669 | orchestrator | 2026-01-02 02:05:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:30.421162 | orchestrator | 2026-01-02 02:05:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:30.421383 | orchestrator | 2026-01-02 02:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:33.471429 | orchestrator | 2026-01-02 02:05:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:33.473043 | orchestrator | 2026-01-02 02:05:33 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:33.473090 | orchestrator | 2026-01-02 02:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:36.517589 | orchestrator | 2026-01-02 02:05:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:36.519226 | orchestrator | 2026-01-02 02:05:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:36.519260 | orchestrator | 2026-01-02 02:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:39.560695 | orchestrator | 2026-01-02 02:05:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:39.562378 | orchestrator | 2026-01-02 02:05:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:39.562495 | orchestrator | 2026-01-02 02:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:42.606196 | orchestrator | 2026-01-02 02:05:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:42.608525 | orchestrator | 2026-01-02 02:05:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:42.608565 | orchestrator | 2026-01-02 02:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:45.653768 | orchestrator | 2026-01-02 02:05:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:45.655359 | orchestrator | 2026-01-02 02:05:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:45.655429 | orchestrator | 2026-01-02 02:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:48.701144 | orchestrator | 2026-01-02 02:05:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:48.703308 | orchestrator | 2026-01-02 02:05:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:05:48.703365 | orchestrator | 2026-01-02 02:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:51.751587 | orchestrator | 2026-01-02 02:05:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:51.752385 | orchestrator | 2026-01-02 02:05:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:51.752487 | orchestrator | 2026-01-02 02:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:54.797576 | orchestrator | 2026-01-02 02:05:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:54.799005 | orchestrator | 2026-01-02 02:05:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:54.799267 | orchestrator | 2026-01-02 02:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:05:57.844038 | orchestrator | 2026-01-02 02:05:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:05:57.845645 | orchestrator | 2026-01-02 02:05:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:05:57.845768 | orchestrator | 2026-01-02 02:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:00.894486 | orchestrator | 2026-01-02 02:06:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:00.896027 | orchestrator | 2026-01-02 02:06:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:00.896131 | orchestrator | 2026-01-02 02:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:03.948078 | orchestrator | 2026-01-02 02:06:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:03.949308 | orchestrator | 2026-01-02 02:06:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:03.949333 | orchestrator | 2026-01-02 02:06:03 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:06:06.994663 | orchestrator | 2026-01-02 02:06:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:06.996779 | orchestrator | 2026-01-02 02:06:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:06.997191 | orchestrator | 2026-01-02 02:06:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:10.041738 | orchestrator | 2026-01-02 02:06:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:10.043661 | orchestrator | 2026-01-02 02:06:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:10.043713 | orchestrator | 2026-01-02 02:06:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:13.091648 | orchestrator | 2026-01-02 02:06:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:13.094554 | orchestrator | 2026-01-02 02:06:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:13.094647 | orchestrator | 2026-01-02 02:06:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:16.134974 | orchestrator | 2026-01-02 02:06:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:16.137604 | orchestrator | 2026-01-02 02:06:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:16.137660 | orchestrator | 2026-01-02 02:06:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:19.185136 | orchestrator | 2026-01-02 02:06:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:19.186745 | orchestrator | 2026-01-02 02:06:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:19.186830 | orchestrator | 2026-01-02 02:06:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:22.227865 | orchestrator | 
2026-01-02 02:06:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:22.229212 | orchestrator | 2026-01-02 02:06:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:22.229375 | orchestrator | 2026-01-02 02:06:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:25.273091 | orchestrator | 2026-01-02 02:06:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:25.274776 | orchestrator | 2026-01-02 02:06:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:25.274824 | orchestrator | 2026-01-02 02:06:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:28.326328 | orchestrator | 2026-01-02 02:06:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:28.328278 | orchestrator | 2026-01-02 02:06:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:28.328402 | orchestrator | 2026-01-02 02:06:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:31.377428 | orchestrator | 2026-01-02 02:06:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:31.379816 | orchestrator | 2026-01-02 02:06:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:31.379865 | orchestrator | 2026-01-02 02:06:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:34.425165 | orchestrator | 2026-01-02 02:06:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:34.427105 | orchestrator | 2026-01-02 02:06:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:34.427154 | orchestrator | 2026-01-02 02:06:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:37.471457 | orchestrator | 2026-01-02 02:06:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:06:37.472856 | orchestrator | 2026-01-02 02:06:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:37.472890 | orchestrator | 2026-01-02 02:06:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:40.519515 | orchestrator | 2026-01-02 02:06:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:40.520616 | orchestrator | 2026-01-02 02:06:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:40.520682 | orchestrator | 2026-01-02 02:06:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:43.566376 | orchestrator | 2026-01-02 02:06:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:43.567782 | orchestrator | 2026-01-02 02:06:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:43.567813 | orchestrator | 2026-01-02 02:06:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:46.611311 | orchestrator | 2026-01-02 02:06:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:46.611956 | orchestrator | 2026-01-02 02:06:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:46.612137 | orchestrator | 2026-01-02 02:06:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:49.652480 | orchestrator | 2026-01-02 02:06:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:49.653696 | orchestrator | 2026-01-02 02:06:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:49.653732 | orchestrator | 2026-01-02 02:06:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:52.696833 | orchestrator | 2026-01-02 02:06:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:52.699134 | orchestrator | 2026-01-02 02:06:52 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:52.699175 | orchestrator | 2026-01-02 02:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:55.745477 | orchestrator | 2026-01-02 02:06:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:55.747620 | orchestrator | 2026-01-02 02:06:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:55.747654 | orchestrator | 2026-01-02 02:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:06:58.789698 | orchestrator | 2026-01-02 02:06:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:06:58.792089 | orchestrator | 2026-01-02 02:06:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:06:58.792294 | orchestrator | 2026-01-02 02:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:01.835431 | orchestrator | 2026-01-02 02:07:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:01.836934 | orchestrator | 2026-01-02 02:07:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:01.836970 | orchestrator | 2026-01-02 02:07:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:04.883204 | orchestrator | 2026-01-02 02:07:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:04.884875 | orchestrator | 2026-01-02 02:07:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:04.884906 | orchestrator | 2026-01-02 02:07:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:07.926850 | orchestrator | 2026-01-02 02:07:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:07.928834 | orchestrator | 2026-01-02 02:07:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:07:07.929105 | orchestrator | 2026-01-02 02:07:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:10.965945 | orchestrator | 2026-01-02 02:07:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:10.967032 | orchestrator | 2026-01-02 02:07:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:10.967139 | orchestrator | 2026-01-02 02:07:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:14.013660 | orchestrator | 2026-01-02 02:07:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:14.016055 | orchestrator | 2026-01-02 02:07:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:14.016197 | orchestrator | 2026-01-02 02:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:17.060862 | orchestrator | 2026-01-02 02:07:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:17.064605 | orchestrator | 2026-01-02 02:07:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:17.064864 | orchestrator | 2026-01-02 02:07:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:20.109478 | orchestrator | 2026-01-02 02:07:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:20.115679 | orchestrator | 2026-01-02 02:07:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:20.116027 | orchestrator | 2026-01-02 02:07:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:23.158704 | orchestrator | 2026-01-02 02:07:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:23.160196 | orchestrator | 2026-01-02 02:07:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:23.160447 | orchestrator | 2026-01-02 02:07:23 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:07:26.208429 | orchestrator | 2026-01-02 02:07:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:26.210438 | orchestrator | 2026-01-02 02:07:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:26.210516 | orchestrator | 2026-01-02 02:07:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:29.255464 | orchestrator | 2026-01-02 02:07:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:29.257267 | orchestrator | 2026-01-02 02:07:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:29.257329 | orchestrator | 2026-01-02 02:07:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:32.306766 | orchestrator | 2026-01-02 02:07:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:32.308138 | orchestrator | 2026-01-02 02:07:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:32.308198 | orchestrator | 2026-01-02 02:07:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:35.353536 | orchestrator | 2026-01-02 02:07:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:35.355283 | orchestrator | 2026-01-02 02:07:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:35.355578 | orchestrator | 2026-01-02 02:07:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:38.403255 | orchestrator | 2026-01-02 02:07:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:38.405569 | orchestrator | 2026-01-02 02:07:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:38.405602 | orchestrator | 2026-01-02 02:07:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:41.452361 | orchestrator | 
2026-01-02 02:07:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:41.454225 | orchestrator | 2026-01-02 02:07:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:41.454267 | orchestrator | 2026-01-02 02:07:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:44.498731 | orchestrator | 2026-01-02 02:07:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:44.500332 | orchestrator | 2026-01-02 02:07:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:44.500371 | orchestrator | 2026-01-02 02:07:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:47.544585 | orchestrator | 2026-01-02 02:07:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:47.545729 | orchestrator | 2026-01-02 02:07:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:47.545756 | orchestrator | 2026-01-02 02:07:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:50.593142 | orchestrator | 2026-01-02 02:07:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:50.594899 | orchestrator | 2026-01-02 02:07:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:50.594985 | orchestrator | 2026-01-02 02:07:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:53.643019 | orchestrator | 2026-01-02 02:07:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:53.645511 | orchestrator | 2026-01-02 02:07:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:53.645568 | orchestrator | 2026-01-02 02:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:56.689323 | orchestrator | 2026-01-02 02:07:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:07:56.691008 | orchestrator | 2026-01-02 02:07:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:56.691128 | orchestrator | 2026-01-02 02:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:07:59.737530 | orchestrator | 2026-01-02 02:07:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:07:59.739911 | orchestrator | 2026-01-02 02:07:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:07:59.739967 | orchestrator | 2026-01-02 02:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:02.785773 | orchestrator | 2026-01-02 02:08:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:02.787195 | orchestrator | 2026-01-02 02:08:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:02.787232 | orchestrator | 2026-01-02 02:08:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:05.840086 | orchestrator | 2026-01-02 02:08:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:05.841768 | orchestrator | 2026-01-02 02:08:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:05.841893 | orchestrator | 2026-01-02 02:08:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:08.884505 | orchestrator | 2026-01-02 02:08:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:08.885722 | orchestrator | 2026-01-02 02:08:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:08.885874 | orchestrator | 2026-01-02 02:08:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:11.930376 | orchestrator | 2026-01-02 02:08:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:11.931915 | orchestrator | 2026-01-02 02:08:11 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:11.932031 | orchestrator | 2026-01-02 02:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:14.976672 | orchestrator | 2026-01-02 02:08:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:14.977661 | orchestrator | 2026-01-02 02:08:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:14.977978 | orchestrator | 2026-01-02 02:08:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:18.033373 | orchestrator | 2026-01-02 02:08:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:18.036678 | orchestrator | 2026-01-02 02:08:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:18.036762 | orchestrator | 2026-01-02 02:08:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:21.075423 | orchestrator | 2026-01-02 02:08:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:21.077076 | orchestrator | 2026-01-02 02:08:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:21.077272 | orchestrator | 2026-01-02 02:08:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:24.114654 | orchestrator | 2026-01-02 02:08:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:24.116455 | orchestrator | 2026-01-02 02:08:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:08:24.116525 | orchestrator | 2026-01-02 02:08:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:08:27.163749 | orchestrator | 2026-01-02 02:08:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:08:27.165729 | orchestrator | 2026-01-02 02:08:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:08:27.166522 | orchestrator | 2026-01-02 02:08:27 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:08:30.212640 | orchestrator | 2026-01-02 02:08:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:08:30.217272 | orchestrator | 2026-01-02 02:08:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:08:30.217338 | orchestrator | 2026-01-02 02:08:30 | INFO  | Wait 1 second(s) until the next check
[... the same three-line polling cycle repeated every ~3 seconds from 02:08:33 through 02:13:56; both tasks remained in state STARTED throughout ...]
2026-01-02 02:13:59.421595 | orchestrator | 2026-01-02 02:13:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:13:59.423725 | orchestrator | 2026-01-02 02:13:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:13:59.423932 | orchestrator | 2026-01-02 02:13:59 | INFO  | Wait
1 second(s) until the next check 2026-01-02 02:14:02.467937 | orchestrator | 2026-01-02 02:14:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:02.468640 | orchestrator | 2026-01-02 02:14:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:02.468700 | orchestrator | 2026-01-02 02:14:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:05.512252 | orchestrator | 2026-01-02 02:14:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:05.513908 | orchestrator | 2026-01-02 02:14:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:05.513943 | orchestrator | 2026-01-02 02:14:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:08.553947 | orchestrator | 2026-01-02 02:14:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:08.556453 | orchestrator | 2026-01-02 02:14:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:08.556502 | orchestrator | 2026-01-02 02:14:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:11.599122 | orchestrator | 2026-01-02 02:14:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:11.600181 | orchestrator | 2026-01-02 02:14:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:11.600216 | orchestrator | 2026-01-02 02:14:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:14.646321 | orchestrator | 2026-01-02 02:14:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:14.647995 | orchestrator | 2026-01-02 02:14:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:14.648503 | orchestrator | 2026-01-02 02:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:17.695521 | orchestrator | 
2026-01-02 02:14:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:17.696237 | orchestrator | 2026-01-02 02:14:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:17.696285 | orchestrator | 2026-01-02 02:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:20.745845 | orchestrator | 2026-01-02 02:14:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:20.746454 | orchestrator | 2026-01-02 02:14:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:20.746488 | orchestrator | 2026-01-02 02:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:23.795676 | orchestrator | 2026-01-02 02:14:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:23.797746 | orchestrator | 2026-01-02 02:14:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:23.797855 | orchestrator | 2026-01-02 02:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:26.845422 | orchestrator | 2026-01-02 02:14:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:26.846755 | orchestrator | 2026-01-02 02:14:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:26.846910 | orchestrator | 2026-01-02 02:14:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:29.889863 | orchestrator | 2026-01-02 02:14:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:29.891321 | orchestrator | 2026-01-02 02:14:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:29.891567 | orchestrator | 2026-01-02 02:14:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:32.935310 | orchestrator | 2026-01-02 02:14:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:14:32.936239 | orchestrator | 2026-01-02 02:14:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:32.936316 | orchestrator | 2026-01-02 02:14:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:35.981887 | orchestrator | 2026-01-02 02:14:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:35.983438 | orchestrator | 2026-01-02 02:14:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:35.983602 | orchestrator | 2026-01-02 02:14:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:39.026332 | orchestrator | 2026-01-02 02:14:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:39.027697 | orchestrator | 2026-01-02 02:14:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:39.027727 | orchestrator | 2026-01-02 02:14:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:42.075846 | orchestrator | 2026-01-02 02:14:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:42.075958 | orchestrator | 2026-01-02 02:14:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:42.075975 | orchestrator | 2026-01-02 02:14:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:45.122257 | orchestrator | 2026-01-02 02:14:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:45.124021 | orchestrator | 2026-01-02 02:14:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:45.124059 | orchestrator | 2026-01-02 02:14:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:48.170996 | orchestrator | 2026-01-02 02:14:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:48.172216 | orchestrator | 2026-01-02 02:14:48 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:48.172259 | orchestrator | 2026-01-02 02:14:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:51.215109 | orchestrator | 2026-01-02 02:14:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:51.216723 | orchestrator | 2026-01-02 02:14:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:51.216759 | orchestrator | 2026-01-02 02:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:54.264110 | orchestrator | 2026-01-02 02:14:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:54.265505 | orchestrator | 2026-01-02 02:14:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:54.265534 | orchestrator | 2026-01-02 02:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:14:57.313982 | orchestrator | 2026-01-02 02:14:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:14:57.315777 | orchestrator | 2026-01-02 02:14:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:14:57.316299 | orchestrator | 2026-01-02 02:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:00.357983 | orchestrator | 2026-01-02 02:15:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:00.359528 | orchestrator | 2026-01-02 02:15:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:00.359580 | orchestrator | 2026-01-02 02:15:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:03.402393 | orchestrator | 2026-01-02 02:15:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:03.403566 | orchestrator | 2026-01-02 02:15:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:15:03.403599 | orchestrator | 2026-01-02 02:15:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:06.446451 | orchestrator | 2026-01-02 02:15:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:06.447919 | orchestrator | 2026-01-02 02:15:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:06.447940 | orchestrator | 2026-01-02 02:15:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:09.489709 | orchestrator | 2026-01-02 02:15:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:09.491733 | orchestrator | 2026-01-02 02:15:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:09.491786 | orchestrator | 2026-01-02 02:15:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:12.534955 | orchestrator | 2026-01-02 02:15:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:12.536277 | orchestrator | 2026-01-02 02:15:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:12.536328 | orchestrator | 2026-01-02 02:15:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:15.581243 | orchestrator | 2026-01-02 02:15:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:15.582506 | orchestrator | 2026-01-02 02:15:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:15.582641 | orchestrator | 2026-01-02 02:15:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:18.629331 | orchestrator | 2026-01-02 02:15:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:18.631234 | orchestrator | 2026-01-02 02:15:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:18.631401 | orchestrator | 2026-01-02 02:15:18 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:15:21.678398 | orchestrator | 2026-01-02 02:15:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:21.679091 | orchestrator | 2026-01-02 02:15:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:21.679234 | orchestrator | 2026-01-02 02:15:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:24.729638 | orchestrator | 2026-01-02 02:15:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:24.732383 | orchestrator | 2026-01-02 02:15:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:24.732436 | orchestrator | 2026-01-02 02:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:27.779417 | orchestrator | 2026-01-02 02:15:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:27.781557 | orchestrator | 2026-01-02 02:15:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:27.781594 | orchestrator | 2026-01-02 02:15:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:30.829653 | orchestrator | 2026-01-02 02:15:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:30.831759 | orchestrator | 2026-01-02 02:15:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:30.831862 | orchestrator | 2026-01-02 02:15:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:33.877917 | orchestrator | 2026-01-02 02:15:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:33.879628 | orchestrator | 2026-01-02 02:15:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:33.879764 | orchestrator | 2026-01-02 02:15:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:36.925526 | orchestrator | 
2026-01-02 02:15:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:36.927733 | orchestrator | 2026-01-02 02:15:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:36.927781 | orchestrator | 2026-01-02 02:15:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:39.971739 | orchestrator | 2026-01-02 02:15:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:39.972477 | orchestrator | 2026-01-02 02:15:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:39.972609 | orchestrator | 2026-01-02 02:15:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:43.029604 | orchestrator | 2026-01-02 02:15:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:43.030302 | orchestrator | 2026-01-02 02:15:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:43.030341 | orchestrator | 2026-01-02 02:15:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:46.077254 | orchestrator | 2026-01-02 02:15:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:46.078804 | orchestrator | 2026-01-02 02:15:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:46.078998 | orchestrator | 2026-01-02 02:15:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:49.120449 | orchestrator | 2026-01-02 02:15:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:49.121839 | orchestrator | 2026-01-02 02:15:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:49.121919 | orchestrator | 2026-01-02 02:15:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:52.165109 | orchestrator | 2026-01-02 02:15:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:15:52.165393 | orchestrator | 2026-01-02 02:15:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:52.165427 | orchestrator | 2026-01-02 02:15:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:55.208011 | orchestrator | 2026-01-02 02:15:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:55.208080 | orchestrator | 2026-01-02 02:15:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:55.208116 | orchestrator | 2026-01-02 02:15:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:15:58.253916 | orchestrator | 2026-01-02 02:15:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:15:58.255068 | orchestrator | 2026-01-02 02:15:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:15:58.255451 | orchestrator | 2026-01-02 02:15:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:01.302881 | orchestrator | 2026-01-02 02:16:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:01.305438 | orchestrator | 2026-01-02 02:16:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:01.305685 | orchestrator | 2026-01-02 02:16:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:04.351936 | orchestrator | 2026-01-02 02:16:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:04.353521 | orchestrator | 2026-01-02 02:16:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:04.353573 | orchestrator | 2026-01-02 02:16:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:07.398445 | orchestrator | 2026-01-02 02:16:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:07.400787 | orchestrator | 2026-01-02 02:16:07 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:07.400822 | orchestrator | 2026-01-02 02:16:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:10.447474 | orchestrator | 2026-01-02 02:16:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:10.448848 | orchestrator | 2026-01-02 02:16:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:10.448916 | orchestrator | 2026-01-02 02:16:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:13.492920 | orchestrator | 2026-01-02 02:16:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:13.495184 | orchestrator | 2026-01-02 02:16:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:13.495400 | orchestrator | 2026-01-02 02:16:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:16.536970 | orchestrator | 2026-01-02 02:16:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:16.538731 | orchestrator | 2026-01-02 02:16:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:16.538827 | orchestrator | 2026-01-02 02:16:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:19.578262 | orchestrator | 2026-01-02 02:16:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:19.579391 | orchestrator | 2026-01-02 02:16:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:19.579482 | orchestrator | 2026-01-02 02:16:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:22.626156 | orchestrator | 2026-01-02 02:16:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:22.628489 | orchestrator | 2026-01-02 02:16:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:16:22.628528 | orchestrator | 2026-01-02 02:16:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:25.675098 | orchestrator | 2026-01-02 02:16:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:25.676987 | orchestrator | 2026-01-02 02:16:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:25.677018 | orchestrator | 2026-01-02 02:16:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:28.723613 | orchestrator | 2026-01-02 02:16:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:28.725685 | orchestrator | 2026-01-02 02:16:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:28.725749 | orchestrator | 2026-01-02 02:16:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:31.774829 | orchestrator | 2026-01-02 02:16:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:31.776710 | orchestrator | 2026-01-02 02:16:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:31.776857 | orchestrator | 2026-01-02 02:16:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:34.824856 | orchestrator | 2026-01-02 02:16:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:34.826673 | orchestrator | 2026-01-02 02:16:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:34.826967 | orchestrator | 2026-01-02 02:16:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:37.876046 | orchestrator | 2026-01-02 02:16:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:37.878566 | orchestrator | 2026-01-02 02:16:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:37.878886 | orchestrator | 2026-01-02 02:16:37 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:16:40.924001 | orchestrator | 2026-01-02 02:16:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:40.925591 | orchestrator | 2026-01-02 02:16:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:40.925623 | orchestrator | 2026-01-02 02:16:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:43.977843 | orchestrator | 2026-01-02 02:16:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:43.980756 | orchestrator | 2026-01-02 02:16:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:43.980792 | orchestrator | 2026-01-02 02:16:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:47.029397 | orchestrator | 2026-01-02 02:16:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:47.030794 | orchestrator | 2026-01-02 02:16:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:47.030823 | orchestrator | 2026-01-02 02:16:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:50.084276 | orchestrator | 2026-01-02 02:16:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:50.087361 | orchestrator | 2026-01-02 02:16:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:50.087414 | orchestrator | 2026-01-02 02:16:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:53.132486 | orchestrator | 2026-01-02 02:16:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:53.135457 | orchestrator | 2026-01-02 02:16:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:53.135561 | orchestrator | 2026-01-02 02:16:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:56.177768 | orchestrator | 
2026-01-02 02:16:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:56.179398 | orchestrator | 2026-01-02 02:16:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:56.179752 | orchestrator | 2026-01-02 02:16:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:16:59.221203 | orchestrator | 2026-01-02 02:16:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:16:59.223164 | orchestrator | 2026-01-02 02:16:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:16:59.223484 | orchestrator | 2026-01-02 02:16:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:02.269117 | orchestrator | 2026-01-02 02:17:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:02.270318 | orchestrator | 2026-01-02 02:17:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:02.270515 | orchestrator | 2026-01-02 02:17:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:05.315058 | orchestrator | 2026-01-02 02:17:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:05.316785 | orchestrator | 2026-01-02 02:17:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:05.316879 | orchestrator | 2026-01-02 02:17:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:08.361884 | orchestrator | 2026-01-02 02:17:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:08.363844 | orchestrator | 2026-01-02 02:17:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:08.364254 | orchestrator | 2026-01-02 02:17:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:11.413542 | orchestrator | 2026-01-02 02:17:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:17:11.415569 | orchestrator | 2026-01-02 02:17:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:11.415659 | orchestrator | 2026-01-02 02:17:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:14.460817 | orchestrator | 2026-01-02 02:17:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:14.462717 | orchestrator | 2026-01-02 02:17:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:14.462772 | orchestrator | 2026-01-02 02:17:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:17.513753 | orchestrator | 2026-01-02 02:17:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:17.516343 | orchestrator | 2026-01-02 02:17:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:17.516455 | orchestrator | 2026-01-02 02:17:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:20.563111 | orchestrator | 2026-01-02 02:17:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:20.564185 | orchestrator | 2026-01-02 02:17:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:20.564294 | orchestrator | 2026-01-02 02:17:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:23.610922 | orchestrator | 2026-01-02 02:17:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:23.613696 | orchestrator | 2026-01-02 02:17:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:23.613868 | orchestrator | 2026-01-02 02:17:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:26.659038 | orchestrator | 2026-01-02 02:17:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:26.660440 | orchestrator | 2026-01-02 02:17:26 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:26.660532 | orchestrator | 2026-01-02 02:17:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:29.707907 | orchestrator | 2026-01-02 02:17:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:29.710862 | orchestrator | 2026-01-02 02:17:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:29.710935 | orchestrator | 2026-01-02 02:17:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:32.755311 | orchestrator | 2026-01-02 02:17:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:32.757681 | orchestrator | 2026-01-02 02:17:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:32.757716 | orchestrator | 2026-01-02 02:17:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:35.797705 | orchestrator | 2026-01-02 02:17:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:35.799373 | orchestrator | 2026-01-02 02:17:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:35.799401 | orchestrator | 2026-01-02 02:17:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:38.845482 | orchestrator | 2026-01-02 02:17:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:38.847420 | orchestrator | 2026-01-02 02:17:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:17:38.847457 | orchestrator | 2026-01-02 02:17:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:17:41.893058 | orchestrator | 2026-01-02 02:17:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:17:41.895339 | orchestrator | 2026-01-02 02:17:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:17:41.895672 | orchestrator | 2026-01-02 02:17:41 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:17:44.937464 | orchestrator | 2026-01-02 02:17:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:17:44.938734 | orchestrator | 2026-01-02 02:17:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:17:44.938883 | orchestrator | 2026-01-02 02:17:44 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:22:43.638578 | orchestrator | 2026-01-02 02:22:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:22:43.639929 | orchestrator | 2026-01-02 02:22:43 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:22:43.639970 | orchestrator | 2026-01-02 02:22:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:22:46.680055 | orchestrator | 2026-01-02 02:22:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:22:46.681636 | orchestrator | 2026-01-02 02:22:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:22:46.681664 | orchestrator | 2026-01-02 02:22:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:22:49.728784 | orchestrator | 2026-01-02 02:22:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:22:49.730821 | orchestrator | 2026-01-02 02:22:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:22:49.730911 | orchestrator | 2026-01-02 02:22:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:22:52.785870 | orchestrator | 2026-01-02 02:22:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:22:52.786950 | orchestrator | 2026-01-02 02:22:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:22:52.787020 | orchestrator | 2026-01-02 02:22:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:22:55.839589 | orchestrator | 2026-01-02 02:22:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:22:55.840751 | orchestrator | 2026-01-02 02:22:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:22:55.841206 | orchestrator | 2026-01-02 02:22:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:22:58.890933 | orchestrator | 2026-01-02 02:22:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:22:58.892866 | orchestrator | 2026-01-02 02:22:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:22:58.892923 | orchestrator | 2026-01-02 02:22:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:01.938558 | orchestrator | 2026-01-02 02:23:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:01.939755 | orchestrator | 2026-01-02 02:23:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:01.940058 | orchestrator | 2026-01-02 02:23:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:04.989942 | orchestrator | 2026-01-02 02:23:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:04.992478 | orchestrator | 2026-01-02 02:23:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:04.992709 | orchestrator | 2026-01-02 02:23:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:08.041553 | orchestrator | 2026-01-02 02:23:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:08.043144 | orchestrator | 2026-01-02 02:23:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:08.043239 | orchestrator | 2026-01-02 02:23:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:11.089037 | orchestrator | 2026-01-02 02:23:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:11.091044 | orchestrator | 2026-01-02 02:23:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:11.091148 | orchestrator | 2026-01-02 02:23:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:14.147252 | orchestrator | 2026-01-02 02:23:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:14.149626 | orchestrator | 2026-01-02 02:23:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:14.149667 | orchestrator | 2026-01-02 02:23:14 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:23:17.199894 | orchestrator | 2026-01-02 02:23:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:17.201601 | orchestrator | 2026-01-02 02:23:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:17.201831 | orchestrator | 2026-01-02 02:23:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:20.248284 | orchestrator | 2026-01-02 02:23:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:20.250200 | orchestrator | 2026-01-02 02:23:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:20.250279 | orchestrator | 2026-01-02 02:23:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:23.293220 | orchestrator | 2026-01-02 02:23:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:23.294404 | orchestrator | 2026-01-02 02:23:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:23.294572 | orchestrator | 2026-01-02 02:23:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:26.333934 | orchestrator | 2026-01-02 02:23:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:26.335625 | orchestrator | 2026-01-02 02:23:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:26.335671 | orchestrator | 2026-01-02 02:23:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:29.385120 | orchestrator | 2026-01-02 02:23:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:29.387219 | orchestrator | 2026-01-02 02:23:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:29.387502 | orchestrator | 2026-01-02 02:23:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:32.440254 | orchestrator | 
2026-01-02 02:23:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:32.441930 | orchestrator | 2026-01-02 02:23:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:32.441983 | orchestrator | 2026-01-02 02:23:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:35.492139 | orchestrator | 2026-01-02 02:23:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:35.493079 | orchestrator | 2026-01-02 02:23:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:35.493112 | orchestrator | 2026-01-02 02:23:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:38.541116 | orchestrator | 2026-01-02 02:23:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:38.542379 | orchestrator | 2026-01-02 02:23:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:38.542418 | orchestrator | 2026-01-02 02:23:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:41.585316 | orchestrator | 2026-01-02 02:23:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:41.586839 | orchestrator | 2026-01-02 02:23:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:41.586873 | orchestrator | 2026-01-02 02:23:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:44.634312 | orchestrator | 2026-01-02 02:23:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:44.636621 | orchestrator | 2026-01-02 02:23:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:44.636674 | orchestrator | 2026-01-02 02:23:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:47.680851 | orchestrator | 2026-01-02 02:23:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:23:47.682253 | orchestrator | 2026-01-02 02:23:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:47.682284 | orchestrator | 2026-01-02 02:23:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:50.730110 | orchestrator | 2026-01-02 02:23:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:50.730946 | orchestrator | 2026-01-02 02:23:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:50.731123 | orchestrator | 2026-01-02 02:23:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:53.778379 | orchestrator | 2026-01-02 02:23:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:53.780019 | orchestrator | 2026-01-02 02:23:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:53.780084 | orchestrator | 2026-01-02 02:23:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:56.828922 | orchestrator | 2026-01-02 02:23:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:56.831199 | orchestrator | 2026-01-02 02:23:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:56.831233 | orchestrator | 2026-01-02 02:23:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:23:59.873391 | orchestrator | 2026-01-02 02:23:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:23:59.874351 | orchestrator | 2026-01-02 02:23:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:23:59.874472 | orchestrator | 2026-01-02 02:23:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:02.923150 | orchestrator | 2026-01-02 02:24:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:02.923975 | orchestrator | 2026-01-02 02:24:02 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:02.924011 | orchestrator | 2026-01-02 02:24:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:05.970203 | orchestrator | 2026-01-02 02:24:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:05.971424 | orchestrator | 2026-01-02 02:24:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:05.971471 | orchestrator | 2026-01-02 02:24:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:09.016083 | orchestrator | 2026-01-02 02:24:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:09.017521 | orchestrator | 2026-01-02 02:24:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:09.017601 | orchestrator | 2026-01-02 02:24:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:12.063257 | orchestrator | 2026-01-02 02:24:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:12.063995 | orchestrator | 2026-01-02 02:24:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:12.064034 | orchestrator | 2026-01-02 02:24:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:15.111782 | orchestrator | 2026-01-02 02:24:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:15.113210 | orchestrator | 2026-01-02 02:24:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:15.113383 | orchestrator | 2026-01-02 02:24:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:18.166149 | orchestrator | 2026-01-02 02:24:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:18.169528 | orchestrator | 2026-01-02 02:24:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:24:18.169558 | orchestrator | 2026-01-02 02:24:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:21.215644 | orchestrator | 2026-01-02 02:24:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:21.216149 | orchestrator | 2026-01-02 02:24:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:21.216191 | orchestrator | 2026-01-02 02:24:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:24.252442 | orchestrator | 2026-01-02 02:24:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:24.253844 | orchestrator | 2026-01-02 02:24:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:24.253888 | orchestrator | 2026-01-02 02:24:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:27.298423 | orchestrator | 2026-01-02 02:24:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:27.300056 | orchestrator | 2026-01-02 02:24:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:27.300265 | orchestrator | 2026-01-02 02:24:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:30.346079 | orchestrator | 2026-01-02 02:24:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:30.348159 | orchestrator | 2026-01-02 02:24:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:30.348877 | orchestrator | 2026-01-02 02:24:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:33.396978 | orchestrator | 2026-01-02 02:24:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:33.398916 | orchestrator | 2026-01-02 02:24:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:33.398981 | orchestrator | 2026-01-02 02:24:33 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:24:36.439557 | orchestrator | 2026-01-02 02:24:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:36.441558 | orchestrator | 2026-01-02 02:24:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:36.441605 | orchestrator | 2026-01-02 02:24:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:39.491282 | orchestrator | 2026-01-02 02:24:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:39.493318 | orchestrator | 2026-01-02 02:24:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:39.493513 | orchestrator | 2026-01-02 02:24:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:42.542094 | orchestrator | 2026-01-02 02:24:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:42.543361 | orchestrator | 2026-01-02 02:24:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:42.543399 | orchestrator | 2026-01-02 02:24:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:45.589232 | orchestrator | 2026-01-02 02:24:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:45.591294 | orchestrator | 2026-01-02 02:24:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:45.591348 | orchestrator | 2026-01-02 02:24:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:48.639994 | orchestrator | 2026-01-02 02:24:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:48.640839 | orchestrator | 2026-01-02 02:24:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:48.640900 | orchestrator | 2026-01-02 02:24:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:51.689893 | orchestrator | 
2026-01-02 02:24:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:51.691671 | orchestrator | 2026-01-02 02:24:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:51.691816 | orchestrator | 2026-01-02 02:24:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:54.746346 | orchestrator | 2026-01-02 02:24:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:54.748286 | orchestrator | 2026-01-02 02:24:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:54.748442 | orchestrator | 2026-01-02 02:24:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:24:57.799878 | orchestrator | 2026-01-02 02:24:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:24:57.802342 | orchestrator | 2026-01-02 02:24:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:24:57.803014 | orchestrator | 2026-01-02 02:24:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:00.852927 | orchestrator | 2026-01-02 02:25:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:00.854196 | orchestrator | 2026-01-02 02:25:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:00.854993 | orchestrator | 2026-01-02 02:25:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:03.905533 | orchestrator | 2026-01-02 02:25:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:03.907901 | orchestrator | 2026-01-02 02:25:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:03.908021 | orchestrator | 2026-01-02 02:25:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:06.956766 | orchestrator | 2026-01-02 02:25:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:25:06.958466 | orchestrator | 2026-01-02 02:25:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:06.958638 | orchestrator | 2026-01-02 02:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:10.006978 | orchestrator | 2026-01-02 02:25:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:10.009776 | orchestrator | 2026-01-02 02:25:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:10.009819 | orchestrator | 2026-01-02 02:25:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:13.063888 | orchestrator | 2026-01-02 02:25:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:13.066412 | orchestrator | 2026-01-02 02:25:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:13.066526 | orchestrator | 2026-01-02 02:25:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:16.117239 | orchestrator | 2026-01-02 02:25:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:16.118715 | orchestrator | 2026-01-02 02:25:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:16.118764 | orchestrator | 2026-01-02 02:25:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:19.169681 | orchestrator | 2026-01-02 02:25:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:19.171863 | orchestrator | 2026-01-02 02:25:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:19.171900 | orchestrator | 2026-01-02 02:25:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:22.217328 | orchestrator | 2026-01-02 02:25:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:22.220667 | orchestrator | 2026-01-02 02:25:22 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:22.220972 | orchestrator | 2026-01-02 02:25:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:25.267044 | orchestrator | 2026-01-02 02:25:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:25.269600 | orchestrator | 2026-01-02 02:25:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:25.269645 | orchestrator | 2026-01-02 02:25:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:28.317426 | orchestrator | 2026-01-02 02:25:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:28.320282 | orchestrator | 2026-01-02 02:25:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:28.320345 | orchestrator | 2026-01-02 02:25:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:31.368951 | orchestrator | 2026-01-02 02:25:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:31.371740 | orchestrator | 2026-01-02 02:25:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:31.371819 | orchestrator | 2026-01-02 02:25:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:34.423355 | orchestrator | 2026-01-02 02:25:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:34.424093 | orchestrator | 2026-01-02 02:25:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:34.424194 | orchestrator | 2026-01-02 02:25:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:37.471812 | orchestrator | 2026-01-02 02:25:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:37.474405 | orchestrator | 2026-01-02 02:25:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:25:37.474466 | orchestrator | 2026-01-02 02:25:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:40.517441 | orchestrator | 2026-01-02 02:25:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:40.518324 | orchestrator | 2026-01-02 02:25:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:40.518480 | orchestrator | 2026-01-02 02:25:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:43.565714 | orchestrator | 2026-01-02 02:25:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:43.568163 | orchestrator | 2026-01-02 02:25:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:43.568208 | orchestrator | 2026-01-02 02:25:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:46.612714 | orchestrator | 2026-01-02 02:25:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:46.615165 | orchestrator | 2026-01-02 02:25:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:46.615274 | orchestrator | 2026-01-02 02:25:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:49.663980 | orchestrator | 2026-01-02 02:25:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:49.664906 | orchestrator | 2026-01-02 02:25:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:49.664940 | orchestrator | 2026-01-02 02:25:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:52.711635 | orchestrator | 2026-01-02 02:25:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:52.712800 | orchestrator | 2026-01-02 02:25:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:52.713112 | orchestrator | 2026-01-02 02:25:52 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:25:55.757411 | orchestrator | 2026-01-02 02:25:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:55.758797 | orchestrator | 2026-01-02 02:25:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:55.758840 | orchestrator | 2026-01-02 02:25:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:25:58.807127 | orchestrator | 2026-01-02 02:25:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:25:58.808369 | orchestrator | 2026-01-02 02:25:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:25:58.808645 | orchestrator | 2026-01-02 02:25:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:01.858853 | orchestrator | 2026-01-02 02:26:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:01.861673 | orchestrator | 2026-01-02 02:26:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:01.861775 | orchestrator | 2026-01-02 02:26:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:04.907009 | orchestrator | 2026-01-02 02:26:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:04.909596 | orchestrator | 2026-01-02 02:26:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:04.909635 | orchestrator | 2026-01-02 02:26:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:07.954329 | orchestrator | 2026-01-02 02:26:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:07.956172 | orchestrator | 2026-01-02 02:26:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:07.956220 | orchestrator | 2026-01-02 02:26:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:11.005152 | orchestrator | 
2026-01-02 02:26:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:11.005997 | orchestrator | 2026-01-02 02:26:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:11.006129 | orchestrator | 2026-01-02 02:26:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:14.054981 | orchestrator | 2026-01-02 02:26:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:14.055713 | orchestrator | 2026-01-02 02:26:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:14.055748 | orchestrator | 2026-01-02 02:26:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:17.097634 | orchestrator | 2026-01-02 02:26:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:17.098728 | orchestrator | 2026-01-02 02:26:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:17.098814 | orchestrator | 2026-01-02 02:26:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:20.146139 | orchestrator | 2026-01-02 02:26:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:20.147911 | orchestrator | 2026-01-02 02:26:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:20.148052 | orchestrator | 2026-01-02 02:26:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:23.195719 | orchestrator | 2026-01-02 02:26:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:26:23.197202 | orchestrator | 2026-01-02 02:26:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:26:23.197293 | orchestrator | 2026-01-02 02:26:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:26:26.241409 | orchestrator | 2026-01-02 02:26:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED
2026-01-02 02:26:26.243122 | orchestrator | 2026-01-02 02:26:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:26:26.243152 | orchestrator | 2026-01-02 02:26:26 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries elided: tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 remain in state STARTED, rechecked every ~3 seconds from 02:26:29 through 02:31:55 ...]
2026-01-02 02:31:58.558550 | orchestrator | 2026-01-02 02:31:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:31:58.560024 | orchestrator | 2026-01-02 02:31:58 |
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:31:58.560125 | orchestrator | 2026-01-02 02:31:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:01.603582 | orchestrator | 2026-01-02 02:32:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:01.605183 | orchestrator | 2026-01-02 02:32:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:01.605215 | orchestrator | 2026-01-02 02:32:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:04.649393 | orchestrator | 2026-01-02 02:32:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:04.650739 | orchestrator | 2026-01-02 02:32:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:04.650787 | orchestrator | 2026-01-02 02:32:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:07.692776 | orchestrator | 2026-01-02 02:32:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:07.694759 | orchestrator | 2026-01-02 02:32:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:07.694865 | orchestrator | 2026-01-02 02:32:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:10.735861 | orchestrator | 2026-01-02 02:32:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:10.737001 | orchestrator | 2026-01-02 02:32:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:10.737041 | orchestrator | 2026-01-02 02:32:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:13.783064 | orchestrator | 2026-01-02 02:32:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:13.784760 | orchestrator | 2026-01-02 02:32:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:32:13.784922 | orchestrator | 2026-01-02 02:32:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:16.831561 | orchestrator | 2026-01-02 02:32:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:16.834679 | orchestrator | 2026-01-02 02:32:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:16.834731 | orchestrator | 2026-01-02 02:32:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:19.882652 | orchestrator | 2026-01-02 02:32:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:19.884822 | orchestrator | 2026-01-02 02:32:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:19.884875 | orchestrator | 2026-01-02 02:32:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:22.937593 | orchestrator | 2026-01-02 02:32:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:22.939115 | orchestrator | 2026-01-02 02:32:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:22.939165 | orchestrator | 2026-01-02 02:32:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:25.992167 | orchestrator | 2026-01-02 02:32:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:25.994452 | orchestrator | 2026-01-02 02:32:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:25.994691 | orchestrator | 2026-01-02 02:32:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:29.043479 | orchestrator | 2026-01-02 02:32:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:29.045521 | orchestrator | 2026-01-02 02:32:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:29.045567 | orchestrator | 2026-01-02 02:32:29 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:32:32.089260 | orchestrator | 2026-01-02 02:32:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:32.089820 | orchestrator | 2026-01-02 02:32:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:32.089857 | orchestrator | 2026-01-02 02:32:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:35.135705 | orchestrator | 2026-01-02 02:32:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:35.136941 | orchestrator | 2026-01-02 02:32:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:35.136978 | orchestrator | 2026-01-02 02:32:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:38.187348 | orchestrator | 2026-01-02 02:32:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:38.189943 | orchestrator | 2026-01-02 02:32:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:38.190014 | orchestrator | 2026-01-02 02:32:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:41.240459 | orchestrator | 2026-01-02 02:32:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:41.242330 | orchestrator | 2026-01-02 02:32:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:41.242364 | orchestrator | 2026-01-02 02:32:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:44.288472 | orchestrator | 2026-01-02 02:32:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:44.290416 | orchestrator | 2026-01-02 02:32:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:44.290489 | orchestrator | 2026-01-02 02:32:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:47.334542 | orchestrator | 
2026-01-02 02:32:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:47.337372 | orchestrator | 2026-01-02 02:32:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:47.337709 | orchestrator | 2026-01-02 02:32:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:50.387037 | orchestrator | 2026-01-02 02:32:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:50.389966 | orchestrator | 2026-01-02 02:32:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:50.390298 | orchestrator | 2026-01-02 02:32:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:53.442861 | orchestrator | 2026-01-02 02:32:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:53.443935 | orchestrator | 2026-01-02 02:32:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:53.443962 | orchestrator | 2026-01-02 02:32:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:56.485413 | orchestrator | 2026-01-02 02:32:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:56.487340 | orchestrator | 2026-01-02 02:32:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:56.487404 | orchestrator | 2026-01-02 02:32:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:32:59.531670 | orchestrator | 2026-01-02 02:32:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:32:59.533880 | orchestrator | 2026-01-02 02:32:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:32:59.533940 | orchestrator | 2026-01-02 02:32:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:02.579883 | orchestrator | 2026-01-02 02:33:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:33:02.581546 | orchestrator | 2026-01-02 02:33:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:02.581738 | orchestrator | 2026-01-02 02:33:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:05.626301 | orchestrator | 2026-01-02 02:33:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:05.628996 | orchestrator | 2026-01-02 02:33:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:05.629045 | orchestrator | 2026-01-02 02:33:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:08.674300 | orchestrator | 2026-01-02 02:33:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:08.675046 | orchestrator | 2026-01-02 02:33:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:08.675817 | orchestrator | 2026-01-02 02:33:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:11.723150 | orchestrator | 2026-01-02 02:33:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:11.725052 | orchestrator | 2026-01-02 02:33:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:11.725087 | orchestrator | 2026-01-02 02:33:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:14.765903 | orchestrator | 2026-01-02 02:33:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:14.767370 | orchestrator | 2026-01-02 02:33:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:14.767462 | orchestrator | 2026-01-02 02:33:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:17.811983 | orchestrator | 2026-01-02 02:33:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:17.813748 | orchestrator | 2026-01-02 02:33:17 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:17.813787 | orchestrator | 2026-01-02 02:33:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:20.860199 | orchestrator | 2026-01-02 02:33:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:20.862503 | orchestrator | 2026-01-02 02:33:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:20.862605 | orchestrator | 2026-01-02 02:33:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:23.912520 | orchestrator | 2026-01-02 02:33:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:23.913759 | orchestrator | 2026-01-02 02:33:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:23.913787 | orchestrator | 2026-01-02 02:33:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:26.960916 | orchestrator | 2026-01-02 02:33:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:26.962382 | orchestrator | 2026-01-02 02:33:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:26.962559 | orchestrator | 2026-01-02 02:33:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:30.008421 | orchestrator | 2026-01-02 02:33:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:30.009823 | orchestrator | 2026-01-02 02:33:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:30.009849 | orchestrator | 2026-01-02 02:33:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:33.054429 | orchestrator | 2026-01-02 02:33:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:33.055921 | orchestrator | 2026-01-02 02:33:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:33:33.056128 | orchestrator | 2026-01-02 02:33:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:36.098434 | orchestrator | 2026-01-02 02:33:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:36.101182 | orchestrator | 2026-01-02 02:33:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:36.101260 | orchestrator | 2026-01-02 02:33:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:39.150791 | orchestrator | 2026-01-02 02:33:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:39.152352 | orchestrator | 2026-01-02 02:33:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:39.153316 | orchestrator | 2026-01-02 02:33:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:42.200932 | orchestrator | 2026-01-02 02:33:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:42.202269 | orchestrator | 2026-01-02 02:33:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:42.202313 | orchestrator | 2026-01-02 02:33:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:45.247242 | orchestrator | 2026-01-02 02:33:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:45.249978 | orchestrator | 2026-01-02 02:33:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:45.250091 | orchestrator | 2026-01-02 02:33:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:48.299519 | orchestrator | 2026-01-02 02:33:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:48.301158 | orchestrator | 2026-01-02 02:33:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:48.301524 | orchestrator | 2026-01-02 02:33:48 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:33:51.346382 | orchestrator | 2026-01-02 02:33:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:51.348476 | orchestrator | 2026-01-02 02:33:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:51.348503 | orchestrator | 2026-01-02 02:33:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:54.395926 | orchestrator | 2026-01-02 02:33:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:54.397569 | orchestrator | 2026-01-02 02:33:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:54.397639 | orchestrator | 2026-01-02 02:33:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:33:57.445735 | orchestrator | 2026-01-02 02:33:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:33:57.446755 | orchestrator | 2026-01-02 02:33:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:33:57.446941 | orchestrator | 2026-01-02 02:33:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:00.491497 | orchestrator | 2026-01-02 02:34:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:00.492927 | orchestrator | 2026-01-02 02:34:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:00.493525 | orchestrator | 2026-01-02 02:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:03.542893 | orchestrator | 2026-01-02 02:34:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:03.545233 | orchestrator | 2026-01-02 02:34:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:03.545311 | orchestrator | 2026-01-02 02:34:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:06.598317 | orchestrator | 
2026-01-02 02:34:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:06.599952 | orchestrator | 2026-01-02 02:34:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:06.600077 | orchestrator | 2026-01-02 02:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:09.643535 | orchestrator | 2026-01-02 02:34:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:09.646248 | orchestrator | 2026-01-02 02:34:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:09.646338 | orchestrator | 2026-01-02 02:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:12.694363 | orchestrator | 2026-01-02 02:34:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:12.695837 | orchestrator | 2026-01-02 02:34:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:12.695875 | orchestrator | 2026-01-02 02:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:15.739721 | orchestrator | 2026-01-02 02:34:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:15.741528 | orchestrator | 2026-01-02 02:34:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:15.741586 | orchestrator | 2026-01-02 02:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:18.790786 | orchestrator | 2026-01-02 02:34:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:18.795865 | orchestrator | 2026-01-02 02:34:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:18.795912 | orchestrator | 2026-01-02 02:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:21.834334 | orchestrator | 2026-01-02 02:34:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:34:21.837081 | orchestrator | 2026-01-02 02:34:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:21.837134 | orchestrator | 2026-01-02 02:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:24.898249 | orchestrator | 2026-01-02 02:34:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:24.899502 | orchestrator | 2026-01-02 02:34:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:24.899569 | orchestrator | 2026-01-02 02:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:27.947753 | orchestrator | 2026-01-02 02:34:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:27.949283 | orchestrator | 2026-01-02 02:34:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:27.949430 | orchestrator | 2026-01-02 02:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:30.997965 | orchestrator | 2026-01-02 02:34:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:31.000177 | orchestrator | 2026-01-02 02:34:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:31.000357 | orchestrator | 2026-01-02 02:34:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:34.057133 | orchestrator | 2026-01-02 02:34:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:34.058112 | orchestrator | 2026-01-02 02:34:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:34.058187 | orchestrator | 2026-01-02 02:34:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:37.101554 | orchestrator | 2026-01-02 02:34:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:37.103634 | orchestrator | 2026-01-02 02:34:37 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:37.103708 | orchestrator | 2026-01-02 02:34:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:40.155094 | orchestrator | 2026-01-02 02:34:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:40.156703 | orchestrator | 2026-01-02 02:34:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:40.156895 | orchestrator | 2026-01-02 02:34:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:43.202082 | orchestrator | 2026-01-02 02:34:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:43.203113 | orchestrator | 2026-01-02 02:34:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:43.203262 | orchestrator | 2026-01-02 02:34:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:46.250335 | orchestrator | 2026-01-02 02:34:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:46.251991 | orchestrator | 2026-01-02 02:34:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:46.252138 | orchestrator | 2026-01-02 02:34:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:49.300907 | orchestrator | 2026-01-02 02:34:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:49.303423 | orchestrator | 2026-01-02 02:34:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:49.303511 | orchestrator | 2026-01-02 02:34:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:52.349176 | orchestrator | 2026-01-02 02:34:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:52.351956 | orchestrator | 2026-01-02 02:34:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:34:52.352356 | orchestrator | 2026-01-02 02:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:55.396909 | orchestrator | 2026-01-02 02:34:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:55.397920 | orchestrator | 2026-01-02 02:34:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:55.397959 | orchestrator | 2026-01-02 02:34:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:34:58.448388 | orchestrator | 2026-01-02 02:34:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:34:58.451119 | orchestrator | 2026-01-02 02:34:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:34:58.451170 | orchestrator | 2026-01-02 02:34:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:01.496332 | orchestrator | 2026-01-02 02:35:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:01.497960 | orchestrator | 2026-01-02 02:35:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:01.498064 | orchestrator | 2026-01-02 02:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:04.547498 | orchestrator | 2026-01-02 02:35:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:04.549028 | orchestrator | 2026-01-02 02:35:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:04.549484 | orchestrator | 2026-01-02 02:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:07.593990 | orchestrator | 2026-01-02 02:35:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:07.596499 | orchestrator | 2026-01-02 02:35:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:07.596587 | orchestrator | 2026-01-02 02:35:07 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:35:10.640566 | orchestrator | 2026-01-02 02:35:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:10.642504 | orchestrator | 2026-01-02 02:35:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:10.642641 | orchestrator | 2026-01-02 02:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:13.685453 | orchestrator | 2026-01-02 02:35:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:13.686884 | orchestrator | 2026-01-02 02:35:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:13.686919 | orchestrator | 2026-01-02 02:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:16.739069 | orchestrator | 2026-01-02 02:35:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:16.741446 | orchestrator | 2026-01-02 02:35:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:16.741768 | orchestrator | 2026-01-02 02:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:19.788601 | orchestrator | 2026-01-02 02:35:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:19.790508 | orchestrator | 2026-01-02 02:35:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:19.790950 | orchestrator | 2026-01-02 02:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:22.842252 | orchestrator | 2026-01-02 02:35:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:22.843299 | orchestrator | 2026-01-02 02:35:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:22.843540 | orchestrator | 2026-01-02 02:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:25.891757 | orchestrator | 
2026-01-02 02:35:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:25.893686 | orchestrator | 2026-01-02 02:35:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:25.893719 | orchestrator | 2026-01-02 02:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:28.955298 | orchestrator | 2026-01-02 02:35:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:28.956561 | orchestrator | 2026-01-02 02:35:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:28.956616 | orchestrator | 2026-01-02 02:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:32.016538 | orchestrator | 2026-01-02 02:35:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:32.017486 | orchestrator | 2026-01-02 02:35:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:32.017532 | orchestrator | 2026-01-02 02:35:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:35.070559 | orchestrator | 2026-01-02 02:35:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:35.073802 | orchestrator | 2026-01-02 02:35:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:35.073866 | orchestrator | 2026-01-02 02:35:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:38.110614 | orchestrator | 2026-01-02 02:35:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:35:38.112310 | orchestrator | 2026-01-02 02:35:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:38.112347 | orchestrator | 2026-01-02 02:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:35:41.176583 | orchestrator | 2026-01-02 02:35:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:35:41.177620 | orchestrator | 2026-01-02 02:35:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:35:41.177838 | orchestrator | 2026-01-02 02:35:41 | INFO  | Wait 1 second(s) until the next check [... identical status checks repeated roughly every 3 seconds from 02:35:44 through 02:40:55; tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 remained in state STARTED throughout ...] 2026-01-02 02:40:58.117583 | orchestrator | 2026-01-02 02:40:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in
state STARTED 2026-01-02 02:40:58.119429 | orchestrator | 2026-01-02 02:40:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:40:58.119561 | orchestrator | 2026-01-02 02:40:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:01.166304 | orchestrator | 2026-01-02 02:41:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:01.168364 | orchestrator | 2026-01-02 02:41:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:01.168621 | orchestrator | 2026-01-02 02:41:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:04.213258 | orchestrator | 2026-01-02 02:41:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:04.214521 | orchestrator | 2026-01-02 02:41:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:04.214679 | orchestrator | 2026-01-02 02:41:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:07.261162 | orchestrator | 2026-01-02 02:41:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:07.263384 | orchestrator | 2026-01-02 02:41:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:07.263452 | orchestrator | 2026-01-02 02:41:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:10.310935 | orchestrator | 2026-01-02 02:41:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:10.313151 | orchestrator | 2026-01-02 02:41:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:10.313329 | orchestrator | 2026-01-02 02:41:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:13.359523 | orchestrator | 2026-01-02 02:41:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:13.361415 | orchestrator | 2026-01-02 02:41:13 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:13.361959 | orchestrator | 2026-01-02 02:41:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:16.401868 | orchestrator | 2026-01-02 02:41:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:16.403716 | orchestrator | 2026-01-02 02:41:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:16.403784 | orchestrator | 2026-01-02 02:41:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:19.449549 | orchestrator | 2026-01-02 02:41:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:19.450581 | orchestrator | 2026-01-02 02:41:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:19.450620 | orchestrator | 2026-01-02 02:41:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:22.496050 | orchestrator | 2026-01-02 02:41:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:22.498121 | orchestrator | 2026-01-02 02:41:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:22.498216 | orchestrator | 2026-01-02 02:41:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:25.541075 | orchestrator | 2026-01-02 02:41:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:25.544881 | orchestrator | 2026-01-02 02:41:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:25.545471 | orchestrator | 2026-01-02 02:41:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:28.589639 | orchestrator | 2026-01-02 02:41:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:28.591352 | orchestrator | 2026-01-02 02:41:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:41:28.591438 | orchestrator | 2026-01-02 02:41:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:31.633887 | orchestrator | 2026-01-02 02:41:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:31.636238 | orchestrator | 2026-01-02 02:41:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:31.636328 | orchestrator | 2026-01-02 02:41:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:34.683098 | orchestrator | 2026-01-02 02:41:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:34.686610 | orchestrator | 2026-01-02 02:41:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:34.686666 | orchestrator | 2026-01-02 02:41:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:37.740498 | orchestrator | 2026-01-02 02:41:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:37.743042 | orchestrator | 2026-01-02 02:41:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:37.743133 | orchestrator | 2026-01-02 02:41:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:40.786016 | orchestrator | 2026-01-02 02:41:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:40.787544 | orchestrator | 2026-01-02 02:41:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:40.787613 | orchestrator | 2026-01-02 02:41:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:43.834613 | orchestrator | 2026-01-02 02:41:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:43.836490 | orchestrator | 2026-01-02 02:41:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:43.836530 | orchestrator | 2026-01-02 02:41:43 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:41:46.879530 | orchestrator | 2026-01-02 02:41:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:46.881708 | orchestrator | 2026-01-02 02:41:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:46.881806 | orchestrator | 2026-01-02 02:41:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:49.931800 | orchestrator | 2026-01-02 02:41:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:49.933878 | orchestrator | 2026-01-02 02:41:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:49.933985 | orchestrator | 2026-01-02 02:41:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:52.978376 | orchestrator | 2026-01-02 02:41:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:52.979682 | orchestrator | 2026-01-02 02:41:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:52.979795 | orchestrator | 2026-01-02 02:41:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:56.023598 | orchestrator | 2026-01-02 02:41:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:56.026211 | orchestrator | 2026-01-02 02:41:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:56.026245 | orchestrator | 2026-01-02 02:41:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:41:59.071216 | orchestrator | 2026-01-02 02:41:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:41:59.071887 | orchestrator | 2026-01-02 02:41:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:41:59.071921 | orchestrator | 2026-01-02 02:41:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:02.116694 | orchestrator | 
2026-01-02 02:42:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:02.118377 | orchestrator | 2026-01-02 02:42:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:02.118476 | orchestrator | 2026-01-02 02:42:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:05.161520 | orchestrator | 2026-01-02 02:42:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:05.163583 | orchestrator | 2026-01-02 02:42:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:05.163624 | orchestrator | 2026-01-02 02:42:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:08.209733 | orchestrator | 2026-01-02 02:42:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:08.212055 | orchestrator | 2026-01-02 02:42:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:08.212115 | orchestrator | 2026-01-02 02:42:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:11.249625 | orchestrator | 2026-01-02 02:42:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:11.251927 | orchestrator | 2026-01-02 02:42:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:11.252287 | orchestrator | 2026-01-02 02:42:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:14.295255 | orchestrator | 2026-01-02 02:42:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:14.297589 | orchestrator | 2026-01-02 02:42:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:14.297740 | orchestrator | 2026-01-02 02:42:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:17.339935 | orchestrator | 2026-01-02 02:42:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:42:17.341317 | orchestrator | 2026-01-02 02:42:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:17.341464 | orchestrator | 2026-01-02 02:42:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:20.383733 | orchestrator | 2026-01-02 02:42:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:20.386394 | orchestrator | 2026-01-02 02:42:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:20.386487 | orchestrator | 2026-01-02 02:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:23.429430 | orchestrator | 2026-01-02 02:42:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:23.431871 | orchestrator | 2026-01-02 02:42:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:23.432041 | orchestrator | 2026-01-02 02:42:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:26.469621 | orchestrator | 2026-01-02 02:42:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:26.471129 | orchestrator | 2026-01-02 02:42:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:26.471173 | orchestrator | 2026-01-02 02:42:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:29.520665 | orchestrator | 2026-01-02 02:42:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:29.522291 | orchestrator | 2026-01-02 02:42:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:29.522335 | orchestrator | 2026-01-02 02:42:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:32.569313 | orchestrator | 2026-01-02 02:42:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:32.570833 | orchestrator | 2026-01-02 02:42:32 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:32.571832 | orchestrator | 2026-01-02 02:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:35.617952 | orchestrator | 2026-01-02 02:42:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:35.619060 | orchestrator | 2026-01-02 02:42:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:35.619133 | orchestrator | 2026-01-02 02:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:38.668641 | orchestrator | 2026-01-02 02:42:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:38.670629 | orchestrator | 2026-01-02 02:42:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:38.670799 | orchestrator | 2026-01-02 02:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:41.719081 | orchestrator | 2026-01-02 02:42:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:41.720296 | orchestrator | 2026-01-02 02:42:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:41.720332 | orchestrator | 2026-01-02 02:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:44.758444 | orchestrator | 2026-01-02 02:42:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:44.759890 | orchestrator | 2026-01-02 02:42:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:44.759940 | orchestrator | 2026-01-02 02:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:47.802696 | orchestrator | 2026-01-02 02:42:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:47.804689 | orchestrator | 2026-01-02 02:42:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:42:47.804879 | orchestrator | 2026-01-02 02:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:50.850517 | orchestrator | 2026-01-02 02:42:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:50.851572 | orchestrator | 2026-01-02 02:42:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:50.851587 | orchestrator | 2026-01-02 02:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:53.897384 | orchestrator | 2026-01-02 02:42:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:53.898835 | orchestrator | 2026-01-02 02:42:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:53.899185 | orchestrator | 2026-01-02 02:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:56.940337 | orchestrator | 2026-01-02 02:42:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:56.942118 | orchestrator | 2026-01-02 02:42:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:57.023591 | orchestrator | 2026-01-02 02:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:42:59.988516 | orchestrator | 2026-01-02 02:42:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:42:59.989640 | orchestrator | 2026-01-02 02:42:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:42:59.990162 | orchestrator | 2026-01-02 02:42:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:03.035395 | orchestrator | 2026-01-02 02:43:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:03.036480 | orchestrator | 2026-01-02 02:43:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:03.036530 | orchestrator | 2026-01-02 02:43:03 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:43:06.084032 | orchestrator | 2026-01-02 02:43:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:06.085678 | orchestrator | 2026-01-02 02:43:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:06.085719 | orchestrator | 2026-01-02 02:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:09.132522 | orchestrator | 2026-01-02 02:43:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:09.133363 | orchestrator | 2026-01-02 02:43:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:09.133705 | orchestrator | 2026-01-02 02:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:12.177885 | orchestrator | 2026-01-02 02:43:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:12.179122 | orchestrator | 2026-01-02 02:43:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:12.179262 | orchestrator | 2026-01-02 02:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:15.224394 | orchestrator | 2026-01-02 02:43:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:15.225050 | orchestrator | 2026-01-02 02:43:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:15.225083 | orchestrator | 2026-01-02 02:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:18.275090 | orchestrator | 2026-01-02 02:43:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:18.277467 | orchestrator | 2026-01-02 02:43:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:18.277516 | orchestrator | 2026-01-02 02:43:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:21.321162 | orchestrator | 
2026-01-02 02:43:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:21.322526 | orchestrator | 2026-01-02 02:43:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:21.323111 | orchestrator | 2026-01-02 02:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:24.367194 | orchestrator | 2026-01-02 02:43:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:24.368829 | orchestrator | 2026-01-02 02:43:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:24.368877 | orchestrator | 2026-01-02 02:43:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:27.409483 | orchestrator | 2026-01-02 02:43:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:27.410967 | orchestrator | 2026-01-02 02:43:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:27.410998 | orchestrator | 2026-01-02 02:43:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:30.450675 | orchestrator | 2026-01-02 02:43:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:30.452577 | orchestrator | 2026-01-02 02:43:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:30.452596 | orchestrator | 2026-01-02 02:43:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:33.500968 | orchestrator | 2026-01-02 02:43:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:33.502654 | orchestrator | 2026-01-02 02:43:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:33.502845 | orchestrator | 2026-01-02 02:43:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:36.548724 | orchestrator | 2026-01-02 02:43:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:43:36.551709 | orchestrator | 2026-01-02 02:43:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:36.551758 | orchestrator | 2026-01-02 02:43:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:39.594543 | orchestrator | 2026-01-02 02:43:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:39.596446 | orchestrator | 2026-01-02 02:43:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:39.596480 | orchestrator | 2026-01-02 02:43:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:42.644475 | orchestrator | 2026-01-02 02:43:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:42.646138 | orchestrator | 2026-01-02 02:43:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:42.646183 | orchestrator | 2026-01-02 02:43:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:45.689085 | orchestrator | 2026-01-02 02:43:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:45.690483 | orchestrator | 2026-01-02 02:43:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:45.690518 | orchestrator | 2026-01-02 02:43:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:48.733640 | orchestrator | 2026-01-02 02:43:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:48.734872 | orchestrator | 2026-01-02 02:43:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:48.734920 | orchestrator | 2026-01-02 02:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:51.783605 | orchestrator | 2026-01-02 02:43:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:51.785303 | orchestrator | 2026-01-02 02:43:51 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:51.785400 | orchestrator | 2026-01-02 02:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:54.831328 | orchestrator | 2026-01-02 02:43:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:54.837076 | orchestrator | 2026-01-02 02:43:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:54.837151 | orchestrator | 2026-01-02 02:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:43:57.880104 | orchestrator | 2026-01-02 02:43:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:43:57.881218 | orchestrator | 2026-01-02 02:43:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:43:57.881312 | orchestrator | 2026-01-02 02:43:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:00.926705 | orchestrator | 2026-01-02 02:44:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:00.929289 | orchestrator | 2026-01-02 02:44:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:00.929368 | orchestrator | 2026-01-02 02:44:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:03.976998 | orchestrator | 2026-01-02 02:44:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:03.978075 | orchestrator | 2026-01-02 02:44:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:03.978128 | orchestrator | 2026-01-02 02:44:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:07.024277 | orchestrator | 2026-01-02 02:44:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:07.025381 | orchestrator | 2026-01-02 02:44:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:44:07.025475 | orchestrator | 2026-01-02 02:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:10.072575 | orchestrator | 2026-01-02 02:44:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:10.074669 | orchestrator | 2026-01-02 02:44:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:10.074709 | orchestrator | 2026-01-02 02:44:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:13.119073 | orchestrator | 2026-01-02 02:44:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:13.120739 | orchestrator | 2026-01-02 02:44:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:13.120977 | orchestrator | 2026-01-02 02:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:16.164851 | orchestrator | 2026-01-02 02:44:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:16.166548 | orchestrator | 2026-01-02 02:44:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:16.166594 | orchestrator | 2026-01-02 02:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:19.210335 | orchestrator | 2026-01-02 02:44:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:19.211982 | orchestrator | 2026-01-02 02:44:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:19.212027 | orchestrator | 2026-01-02 02:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:44:22.256436 | orchestrator | 2026-01-02 02:44:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:44:22.257462 | orchestrator | 2026-01-02 02:44:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:44:22.258198 | orchestrator | 2026-01-02 02:44:22 | INFO  | Wait 
1 second(s) until the next check
2026-01-02 02:44:25.307168 | orchestrator | 2026-01-02 02:44:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:44:25.308751 | orchestrator | 2026-01-02 02:44:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:44:25.308848 | orchestrator | 2026-01-02 02:44:25 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:44:28 through 02:49:36; both tasks remain in state STARTED throughout ...]
2026-01-02 02:49:39.326288 | orchestrator | 2026-01-02 02:49:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:49:39.326509 | orchestrator | 2026-01-02 02:49:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:49:39.326532 | orchestrator | 2026-01-02 02:49:39 | INFO  | Wait
1 second(s) until the next check 2026-01-02 02:49:42.373099 | orchestrator | 2026-01-02 02:49:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:49:42.374459 | orchestrator | 2026-01-02 02:49:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:49:42.374496 | orchestrator | 2026-01-02 02:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:45.422417 | orchestrator | 2026-01-02 02:49:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:49:45.424545 | orchestrator | 2026-01-02 02:49:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:49:45.424658 | orchestrator | 2026-01-02 02:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:48.474306 | orchestrator | 2026-01-02 02:49:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:49:48.476061 | orchestrator | 2026-01-02 02:49:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:49:48.476213 | orchestrator | 2026-01-02 02:49:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:51.518687 | orchestrator | 2026-01-02 02:49:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:49:51.520609 | orchestrator | 2026-01-02 02:49:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:49:51.520711 | orchestrator | 2026-01-02 02:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:54.562995 | orchestrator | 2026-01-02 02:49:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:49:54.564753 | orchestrator | 2026-01-02 02:49:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:49:54.564776 | orchestrator | 2026-01-02 02:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:49:57.607223 | orchestrator | 
2026-01-02 02:49:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:49:57.609201 | orchestrator | 2026-01-02 02:49:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:49:57.609252 | orchestrator | 2026-01-02 02:49:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:00.652669 | orchestrator | 2026-01-02 02:50:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:00.654408 | orchestrator | 2026-01-02 02:50:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:00.654568 | orchestrator | 2026-01-02 02:50:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:03.705468 | orchestrator | 2026-01-02 02:50:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:03.707754 | orchestrator | 2026-01-02 02:50:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:03.707800 | orchestrator | 2026-01-02 02:50:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:06.757090 | orchestrator | 2026-01-02 02:50:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:06.757878 | orchestrator | 2026-01-02 02:50:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:06.758145 | orchestrator | 2026-01-02 02:50:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:09.804947 | orchestrator | 2026-01-02 02:50:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:09.806206 | orchestrator | 2026-01-02 02:50:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:09.806482 | orchestrator | 2026-01-02 02:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:12.851377 | orchestrator | 2026-01-02 02:50:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:50:12.853162 | orchestrator | 2026-01-02 02:50:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:12.853208 | orchestrator | 2026-01-02 02:50:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:15.898219 | orchestrator | 2026-01-02 02:50:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:15.899672 | orchestrator | 2026-01-02 02:50:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:15.899698 | orchestrator | 2026-01-02 02:50:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:18.947113 | orchestrator | 2026-01-02 02:50:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:18.948499 | orchestrator | 2026-01-02 02:50:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:18.948547 | orchestrator | 2026-01-02 02:50:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:21.996637 | orchestrator | 2026-01-02 02:50:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:21.998346 | orchestrator | 2026-01-02 02:50:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:21.998378 | orchestrator | 2026-01-02 02:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:25.046978 | orchestrator | 2026-01-02 02:50:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:25.048938 | orchestrator | 2026-01-02 02:50:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:25.048978 | orchestrator | 2026-01-02 02:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:28.094227 | orchestrator | 2026-01-02 02:50:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:28.095450 | orchestrator | 2026-01-02 02:50:28 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:28.095510 | orchestrator | 2026-01-02 02:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:31.146319 | orchestrator | 2026-01-02 02:50:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:31.148230 | orchestrator | 2026-01-02 02:50:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:31.148314 | orchestrator | 2026-01-02 02:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:34.193583 | orchestrator | 2026-01-02 02:50:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:34.196515 | orchestrator | 2026-01-02 02:50:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:34.196821 | orchestrator | 2026-01-02 02:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:37.239313 | orchestrator | 2026-01-02 02:50:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:37.240849 | orchestrator | 2026-01-02 02:50:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:37.240914 | orchestrator | 2026-01-02 02:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:40.286828 | orchestrator | 2026-01-02 02:50:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:40.287691 | orchestrator | 2026-01-02 02:50:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:40.287739 | orchestrator | 2026-01-02 02:50:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:43.334749 | orchestrator | 2026-01-02 02:50:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:43.335968 | orchestrator | 2026-01-02 02:50:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:50:43.336001 | orchestrator | 2026-01-02 02:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:46.387612 | orchestrator | 2026-01-02 02:50:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:46.390348 | orchestrator | 2026-01-02 02:50:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:46.390950 | orchestrator | 2026-01-02 02:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:49.435989 | orchestrator | 2026-01-02 02:50:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:49.437400 | orchestrator | 2026-01-02 02:50:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:49.437566 | orchestrator | 2026-01-02 02:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:52.487600 | orchestrator | 2026-01-02 02:50:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:52.488992 | orchestrator | 2026-01-02 02:50:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:52.489034 | orchestrator | 2026-01-02 02:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:55.541968 | orchestrator | 2026-01-02 02:50:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:55.543852 | orchestrator | 2026-01-02 02:50:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:55.543950 | orchestrator | 2026-01-02 02:50:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:50:58.596704 | orchestrator | 2026-01-02 02:50:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:50:58.598206 | orchestrator | 2026-01-02 02:50:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:50:58.598386 | orchestrator | 2026-01-02 02:50:58 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:51:01.645735 | orchestrator | 2026-01-02 02:51:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:01.649058 | orchestrator | 2026-01-02 02:51:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:01.649119 | orchestrator | 2026-01-02 02:51:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:04.694277 | orchestrator | 2026-01-02 02:51:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:04.696607 | orchestrator | 2026-01-02 02:51:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:04.696654 | orchestrator | 2026-01-02 02:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:07.747839 | orchestrator | 2026-01-02 02:51:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:07.750478 | orchestrator | 2026-01-02 02:51:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:07.750558 | orchestrator | 2026-01-02 02:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:10.803839 | orchestrator | 2026-01-02 02:51:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:10.806219 | orchestrator | 2026-01-02 02:51:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:10.806258 | orchestrator | 2026-01-02 02:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:13.852982 | orchestrator | 2026-01-02 02:51:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:13.857027 | orchestrator | 2026-01-02 02:51:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:13.857101 | orchestrator | 2026-01-02 02:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:16.901973 | orchestrator | 
2026-01-02 02:51:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:16.903863 | orchestrator | 2026-01-02 02:51:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:16.904040 | orchestrator | 2026-01-02 02:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:19.953725 | orchestrator | 2026-01-02 02:51:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:19.954991 | orchestrator | 2026-01-02 02:51:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:19.955026 | orchestrator | 2026-01-02 02:51:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:23.003285 | orchestrator | 2026-01-02 02:51:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:23.005021 | orchestrator | 2026-01-02 02:51:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:23.005057 | orchestrator | 2026-01-02 02:51:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:26.057564 | orchestrator | 2026-01-02 02:51:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:26.058272 | orchestrator | 2026-01-02 02:51:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:26.058464 | orchestrator | 2026-01-02 02:51:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:29.110414 | orchestrator | 2026-01-02 02:51:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:29.112266 | orchestrator | 2026-01-02 02:51:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:29.112385 | orchestrator | 2026-01-02 02:51:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:32.156877 | orchestrator | 2026-01-02 02:51:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:51:32.160307 | orchestrator | 2026-01-02 02:51:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:32.160357 | orchestrator | 2026-01-02 02:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:35.207179 | orchestrator | 2026-01-02 02:51:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:35.209000 | orchestrator | 2026-01-02 02:51:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:35.209051 | orchestrator | 2026-01-02 02:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:38.255330 | orchestrator | 2026-01-02 02:51:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:38.256065 | orchestrator | 2026-01-02 02:51:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:38.256120 | orchestrator | 2026-01-02 02:51:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:41.301444 | orchestrator | 2026-01-02 02:51:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:41.302535 | orchestrator | 2026-01-02 02:51:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:41.302573 | orchestrator | 2026-01-02 02:51:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:44.346462 | orchestrator | 2026-01-02 02:51:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:44.348434 | orchestrator | 2026-01-02 02:51:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:44.348479 | orchestrator | 2026-01-02 02:51:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:47.394580 | orchestrator | 2026-01-02 02:51:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:47.396575 | orchestrator | 2026-01-02 02:51:47 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:47.396663 | orchestrator | 2026-01-02 02:51:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:50.443265 | orchestrator | 2026-01-02 02:51:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:50.445373 | orchestrator | 2026-01-02 02:51:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:50.445430 | orchestrator | 2026-01-02 02:51:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:53.493250 | orchestrator | 2026-01-02 02:51:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:53.497088 | orchestrator | 2026-01-02 02:51:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:53.497191 | orchestrator | 2026-01-02 02:51:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:56.539447 | orchestrator | 2026-01-02 02:51:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:56.540987 | orchestrator | 2026-01-02 02:51:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:56.541171 | orchestrator | 2026-01-02 02:51:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:51:59.592984 | orchestrator | 2026-01-02 02:51:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:51:59.595320 | orchestrator | 2026-01-02 02:51:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:51:59.595375 | orchestrator | 2026-01-02 02:51:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:02.638916 | orchestrator | 2026-01-02 02:52:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:02.643915 | orchestrator | 2026-01-02 02:52:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:52:02.644282 | orchestrator | 2026-01-02 02:52:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:05.694187 | orchestrator | 2026-01-02 02:52:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:05.696852 | orchestrator | 2026-01-02 02:52:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:05.696941 | orchestrator | 2026-01-02 02:52:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:08.738102 | orchestrator | 2026-01-02 02:52:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:08.739156 | orchestrator | 2026-01-02 02:52:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:08.739188 | orchestrator | 2026-01-02 02:52:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:11.782960 | orchestrator | 2026-01-02 02:52:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:11.785084 | orchestrator | 2026-01-02 02:52:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:11.785211 | orchestrator | 2026-01-02 02:52:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:14.832794 | orchestrator | 2026-01-02 02:52:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:14.834117 | orchestrator | 2026-01-02 02:52:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:14.834184 | orchestrator | 2026-01-02 02:52:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:17.879149 | orchestrator | 2026-01-02 02:52:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:17.881026 | orchestrator | 2026-01-02 02:52:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:17.881184 | orchestrator | 2026-01-02 02:52:17 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:52:20.923408 | orchestrator | 2026-01-02 02:52:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:20.924952 | orchestrator | 2026-01-02 02:52:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:20.925099 | orchestrator | 2026-01-02 02:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:23.966154 | orchestrator | 2026-01-02 02:52:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:23.968077 | orchestrator | 2026-01-02 02:52:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:23.968121 | orchestrator | 2026-01-02 02:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:27.017335 | orchestrator | 2026-01-02 02:52:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:27.018673 | orchestrator | 2026-01-02 02:52:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:27.018702 | orchestrator | 2026-01-02 02:52:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:30.068048 | orchestrator | 2026-01-02 02:52:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:30.070064 | orchestrator | 2026-01-02 02:52:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:30.070112 | orchestrator | 2026-01-02 02:52:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:33.116248 | orchestrator | 2026-01-02 02:52:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:33.121428 | orchestrator | 2026-01-02 02:52:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:33.121569 | orchestrator | 2026-01-02 02:52:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:36.169480 | orchestrator | 
2026-01-02 02:52:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:36.169752 | orchestrator | 2026-01-02 02:52:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:36.169780 | orchestrator | 2026-01-02 02:52:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:39.217769 | orchestrator | 2026-01-02 02:52:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:39.219560 | orchestrator | 2026-01-02 02:52:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:39.219607 | orchestrator | 2026-01-02 02:52:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:42.267255 | orchestrator | 2026-01-02 02:52:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:42.268892 | orchestrator | 2026-01-02 02:52:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:42.268941 | orchestrator | 2026-01-02 02:52:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:45.316134 | orchestrator | 2026-01-02 02:52:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:45.317361 | orchestrator | 2026-01-02 02:52:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:45.317466 | orchestrator | 2026-01-02 02:52:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:48.365735 | orchestrator | 2026-01-02 02:52:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:48.367747 | orchestrator | 2026-01-02 02:52:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:48.367811 | orchestrator | 2026-01-02 02:52:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:51.410627 | orchestrator | 2026-01-02 02:52:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:52:51.413323 | orchestrator | 2026-01-02 02:52:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:51.413438 | orchestrator | 2026-01-02 02:52:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:54.458440 | orchestrator | 2026-01-02 02:52:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:54.459842 | orchestrator | 2026-01-02 02:52:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:54.459869 | orchestrator | 2026-01-02 02:52:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:52:57.511498 | orchestrator | 2026-01-02 02:52:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:52:57.513572 | orchestrator | 2026-01-02 02:52:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:52:57.513740 | orchestrator | 2026-01-02 02:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:00.556240 | orchestrator | 2026-01-02 02:53:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:00.557927 | orchestrator | 2026-01-02 02:53:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:00.557982 | orchestrator | 2026-01-02 02:53:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:03.606285 | orchestrator | 2026-01-02 02:53:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:03.608096 | orchestrator | 2026-01-02 02:53:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:03.608189 | orchestrator | 2026-01-02 02:53:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:06.652374 | orchestrator | 2026-01-02 02:53:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:06.653613 | orchestrator | 2026-01-02 02:53:06 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:06.653743 | orchestrator | 2026-01-02 02:53:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:09.698334 | orchestrator | 2026-01-02 02:53:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:09.699866 | orchestrator | 2026-01-02 02:53:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:09.699941 | orchestrator | 2026-01-02 02:53:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:12.743996 | orchestrator | 2026-01-02 02:53:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:12.746367 | orchestrator | 2026-01-02 02:53:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:12.746401 | orchestrator | 2026-01-02 02:53:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:15.795407 | orchestrator | 2026-01-02 02:53:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:15.797825 | orchestrator | 2026-01-02 02:53:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:15.798292 | orchestrator | 2026-01-02 02:53:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:18.848614 | orchestrator | 2026-01-02 02:53:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:18.850112 | orchestrator | 2026-01-02 02:53:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:53:18.850183 | orchestrator | 2026-01-02 02:53:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:53:21.894804 | orchestrator | 2026-01-02 02:53:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:53:22.068551 | orchestrator | 2026-01-02 02:53:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:53:22.068611 | orchestrator | 2026-01-02 02:53:21 | INFO  | Wait 1 second(s) until the next check
2026-01-02 02:53:24.940860 | orchestrator | 2026-01-02 02:53:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:53:24.942799 | orchestrator | 2026-01-02 02:53:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:53:24.942864 | orchestrator | 2026-01-02 02:53:24 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output trimmed: tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 remained in state STARTED, re-checked roughly every 3 seconds from 02:53:27 through 02:58:51 ...]
2026-01-02 02:58:54.134300 | orchestrator | 2026-01-02 02:58:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 02:58:54.135579 | orchestrator | 2026-01-02 02:58:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 02:58:54.135651 | orchestrator | 2026-01-02 02:58:54 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 02:58:57.189633 | orchestrator | 2026-01-02 02:58:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:58:57.191313 | orchestrator | 2026-01-02 02:58:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:58:57.191376 | orchestrator | 2026-01-02 02:58:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:00.240365 | orchestrator | 2026-01-02 02:59:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:00.241170 | orchestrator | 2026-01-02 02:59:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:00.241204 | orchestrator | 2026-01-02 02:59:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:03.292910 | orchestrator | 2026-01-02 02:59:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:03.294869 | orchestrator | 2026-01-02 02:59:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:03.294926 | orchestrator | 2026-01-02 02:59:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:06.337302 | orchestrator | 2026-01-02 02:59:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:06.339002 | orchestrator | 2026-01-02 02:59:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:06.339151 | orchestrator | 2026-01-02 02:59:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:09.393000 | orchestrator | 2026-01-02 02:59:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:09.394574 | orchestrator | 2026-01-02 02:59:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:09.394643 | orchestrator | 2026-01-02 02:59:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:12.441627 | orchestrator | 
2026-01-02 02:59:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:12.443780 | orchestrator | 2026-01-02 02:59:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:12.443880 | orchestrator | 2026-01-02 02:59:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:15.488818 | orchestrator | 2026-01-02 02:59:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:15.489115 | orchestrator | 2026-01-02 02:59:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:15.489204 | orchestrator | 2026-01-02 02:59:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:18.536802 | orchestrator | 2026-01-02 02:59:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:18.540312 | orchestrator | 2026-01-02 02:59:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:18.540385 | orchestrator | 2026-01-02 02:59:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:21.588306 | orchestrator | 2026-01-02 02:59:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:21.589509 | orchestrator | 2026-01-02 02:59:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:21.589923 | orchestrator | 2026-01-02 02:59:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:24.638116 | orchestrator | 2026-01-02 02:59:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:24.639083 | orchestrator | 2026-01-02 02:59:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:24.639250 | orchestrator | 2026-01-02 02:59:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:27.684230 | orchestrator | 2026-01-02 02:59:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 02:59:27.685612 | orchestrator | 2026-01-02 02:59:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:27.685702 | orchestrator | 2026-01-02 02:59:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:30.734000 | orchestrator | 2026-01-02 02:59:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:30.735214 | orchestrator | 2026-01-02 02:59:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:30.735264 | orchestrator | 2026-01-02 02:59:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:33.781571 | orchestrator | 2026-01-02 02:59:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:33.782888 | orchestrator | 2026-01-02 02:59:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:33.782946 | orchestrator | 2026-01-02 02:59:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:36.826695 | orchestrator | 2026-01-02 02:59:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:36.829214 | orchestrator | 2026-01-02 02:59:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:36.829385 | orchestrator | 2026-01-02 02:59:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:39.875063 | orchestrator | 2026-01-02 02:59:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:39.877817 | orchestrator | 2026-01-02 02:59:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:39.877882 | orchestrator | 2026-01-02 02:59:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:42.922914 | orchestrator | 2026-01-02 02:59:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:42.925025 | orchestrator | 2026-01-02 02:59:42 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:42.925080 | orchestrator | 2026-01-02 02:59:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:45.970564 | orchestrator | 2026-01-02 02:59:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:45.972410 | orchestrator | 2026-01-02 02:59:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:45.972461 | orchestrator | 2026-01-02 02:59:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:49.018676 | orchestrator | 2026-01-02 02:59:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:49.019692 | orchestrator | 2026-01-02 02:59:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:49.019812 | orchestrator | 2026-01-02 02:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:52.063038 | orchestrator | 2026-01-02 02:59:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:52.063195 | orchestrator | 2026-01-02 02:59:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:52.063210 | orchestrator | 2026-01-02 02:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:55.115808 | orchestrator | 2026-01-02 02:59:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:55.117057 | orchestrator | 2026-01-02 02:59:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 02:59:55.117114 | orchestrator | 2026-01-02 02:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 02:59:58.162740 | orchestrator | 2026-01-02 02:59:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 02:59:58.164305 | orchestrator | 2026-01-02 02:59:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 02:59:58.164347 | orchestrator | 2026-01-02 02:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:01.206496 | orchestrator | 2026-01-02 03:00:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:01.207721 | orchestrator | 2026-01-02 03:00:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:01.207864 | orchestrator | 2026-01-02 03:00:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:04.263177 | orchestrator | 2026-01-02 03:00:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:04.265012 | orchestrator | 2026-01-02 03:00:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:04.265108 | orchestrator | 2026-01-02 03:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:07.313573 | orchestrator | 2026-01-02 03:00:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:07.315478 | orchestrator | 2026-01-02 03:00:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:07.315628 | orchestrator | 2026-01-02 03:00:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:10.363277 | orchestrator | 2026-01-02 03:00:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:10.364640 | orchestrator | 2026-01-02 03:00:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:10.364694 | orchestrator | 2026-01-02 03:00:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:13.411488 | orchestrator | 2026-01-02 03:00:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:13.413041 | orchestrator | 2026-01-02 03:00:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:13.413074 | orchestrator | 2026-01-02 03:00:13 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:00:16.453248 | orchestrator | 2026-01-02 03:00:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:16.454416 | orchestrator | 2026-01-02 03:00:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:16.454471 | orchestrator | 2026-01-02 03:00:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:19.504546 | orchestrator | 2026-01-02 03:00:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:19.505665 | orchestrator | 2026-01-02 03:00:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:19.505732 | orchestrator | 2026-01-02 03:00:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:22.549703 | orchestrator | 2026-01-02 03:00:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:22.551371 | orchestrator | 2026-01-02 03:00:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:22.551455 | orchestrator | 2026-01-02 03:00:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:25.603922 | orchestrator | 2026-01-02 03:00:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:25.604276 | orchestrator | 2026-01-02 03:00:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:25.604302 | orchestrator | 2026-01-02 03:00:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:28.648997 | orchestrator | 2026-01-02 03:00:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:28.649741 | orchestrator | 2026-01-02 03:00:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:28.649776 | orchestrator | 2026-01-02 03:00:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:31.695013 | orchestrator | 
2026-01-02 03:00:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:31.697153 | orchestrator | 2026-01-02 03:00:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:31.697234 | orchestrator | 2026-01-02 03:00:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:34.746318 | orchestrator | 2026-01-02 03:00:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:34.747376 | orchestrator | 2026-01-02 03:00:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:34.747459 | orchestrator | 2026-01-02 03:00:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:37.793155 | orchestrator | 2026-01-02 03:00:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:37.794173 | orchestrator | 2026-01-02 03:00:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:37.794214 | orchestrator | 2026-01-02 03:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:40.842181 | orchestrator | 2026-01-02 03:00:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:40.842871 | orchestrator | 2026-01-02 03:00:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:40.842904 | orchestrator | 2026-01-02 03:00:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:43.888776 | orchestrator | 2026-01-02 03:00:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:43.889868 | orchestrator | 2026-01-02 03:00:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:43.889890 | orchestrator | 2026-01-02 03:00:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:46.935376 | orchestrator | 2026-01-02 03:00:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:00:46.936590 | orchestrator | 2026-01-02 03:00:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:46.936671 | orchestrator | 2026-01-02 03:00:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:49.984732 | orchestrator | 2026-01-02 03:00:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:49.986577 | orchestrator | 2026-01-02 03:00:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:49.986979 | orchestrator | 2026-01-02 03:00:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:53.040293 | orchestrator | 2026-01-02 03:00:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:53.041817 | orchestrator | 2026-01-02 03:00:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:53.041886 | orchestrator | 2026-01-02 03:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:56.098621 | orchestrator | 2026-01-02 03:00:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:56.099580 | orchestrator | 2026-01-02 03:00:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:56.099632 | orchestrator | 2026-01-02 03:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:00:59.140713 | orchestrator | 2026-01-02 03:00:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:00:59.142457 | orchestrator | 2026-01-02 03:00:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:00:59.142582 | orchestrator | 2026-01-02 03:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:02.189884 | orchestrator | 2026-01-02 03:01:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:02.191324 | orchestrator | 2026-01-02 03:01:02 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:02.191652 | orchestrator | 2026-01-02 03:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:05.237920 | orchestrator | 2026-01-02 03:01:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:05.238123 | orchestrator | 2026-01-02 03:01:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:05.238252 | orchestrator | 2026-01-02 03:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:08.280901 | orchestrator | 2026-01-02 03:01:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:08.282583 | orchestrator | 2026-01-02 03:01:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:08.282763 | orchestrator | 2026-01-02 03:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:11.327289 | orchestrator | 2026-01-02 03:01:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:11.329013 | orchestrator | 2026-01-02 03:01:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:11.329277 | orchestrator | 2026-01-02 03:01:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:14.368085 | orchestrator | 2026-01-02 03:01:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:14.369747 | orchestrator | 2026-01-02 03:01:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:14.369797 | orchestrator | 2026-01-02 03:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:17.413451 | orchestrator | 2026-01-02 03:01:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:17.414750 | orchestrator | 2026-01-02 03:01:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:01:17.414920 | orchestrator | 2026-01-02 03:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:20.462414 | orchestrator | 2026-01-02 03:01:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:20.463697 | orchestrator | 2026-01-02 03:01:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:20.464121 | orchestrator | 2026-01-02 03:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:23.511578 | orchestrator | 2026-01-02 03:01:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:23.513611 | orchestrator | 2026-01-02 03:01:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:23.513762 | orchestrator | 2026-01-02 03:01:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:26.570382 | orchestrator | 2026-01-02 03:01:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:26.572605 | orchestrator | 2026-01-02 03:01:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:26.572707 | orchestrator | 2026-01-02 03:01:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:29.620191 | orchestrator | 2026-01-02 03:01:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:29.621186 | orchestrator | 2026-01-02 03:01:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:29.621247 | orchestrator | 2026-01-02 03:01:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:32.668217 | orchestrator | 2026-01-02 03:01:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:32.669464 | orchestrator | 2026-01-02 03:01:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:32.669495 | orchestrator | 2026-01-02 03:01:32 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:01:35.714062 | orchestrator | 2026-01-02 03:01:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:35.717204 | orchestrator | 2026-01-02 03:01:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:35.717440 | orchestrator | 2026-01-02 03:01:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:38.766798 | orchestrator | 2026-01-02 03:01:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:38.768345 | orchestrator | 2026-01-02 03:01:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:38.768388 | orchestrator | 2026-01-02 03:01:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:41.820072 | orchestrator | 2026-01-02 03:01:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:41.823062 | orchestrator | 2026-01-02 03:01:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:41.823153 | orchestrator | 2026-01-02 03:01:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:44.867397 | orchestrator | 2026-01-02 03:01:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:44.870830 | orchestrator | 2026-01-02 03:01:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:44.870913 | orchestrator | 2026-01-02 03:01:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:47.918200 | orchestrator | 2026-01-02 03:01:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:47.920125 | orchestrator | 2026-01-02 03:01:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:47.920314 | orchestrator | 2026-01-02 03:01:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:50.966888 | orchestrator | 
2026-01-02 03:01:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:50.967593 | orchestrator | 2026-01-02 03:01:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:50.967630 | orchestrator | 2026-01-02 03:01:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:54.011442 | orchestrator | 2026-01-02 03:01:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:54.013810 | orchestrator | 2026-01-02 03:01:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:54.013854 | orchestrator | 2026-01-02 03:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:01:57.057793 | orchestrator | 2026-01-02 03:01:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:01:57.058011 | orchestrator | 2026-01-02 03:01:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:01:57.058074 | orchestrator | 2026-01-02 03:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:00.101336 | orchestrator | 2026-01-02 03:02:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:00.101967 | orchestrator | 2026-01-02 03:02:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:00.101992 | orchestrator | 2026-01-02 03:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:03.148356 | orchestrator | 2026-01-02 03:02:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:03.149163 | orchestrator | 2026-01-02 03:02:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:03.149207 | orchestrator | 2026-01-02 03:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:06.196607 | orchestrator | 2026-01-02 03:02:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:02:06.198261 | orchestrator | 2026-01-02 03:02:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:06.198302 | orchestrator | 2026-01-02 03:02:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:09.243780 | orchestrator | 2026-01-02 03:02:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:09.245057 | orchestrator | 2026-01-02 03:02:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:09.245137 | orchestrator | 2026-01-02 03:02:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:12.296058 | orchestrator | 2026-01-02 03:02:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:12.297734 | orchestrator | 2026-01-02 03:02:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:12.297773 | orchestrator | 2026-01-02 03:02:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:15.345993 | orchestrator | 2026-01-02 03:02:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:15.347301 | orchestrator | 2026-01-02 03:02:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:15.347336 | orchestrator | 2026-01-02 03:02:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:18.390241 | orchestrator | 2026-01-02 03:02:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:18.391898 | orchestrator | 2026-01-02 03:02:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:18.391960 | orchestrator | 2026-01-02 03:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:21.438770 | orchestrator | 2026-01-02 03:02:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:21.439459 | orchestrator | 2026-01-02 03:02:21 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:21.439495 | orchestrator | 2026-01-02 03:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:24.490408 | orchestrator | 2026-01-02 03:02:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:24.491196 | orchestrator | 2026-01-02 03:02:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:24.491336 | orchestrator | 2026-01-02 03:02:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:27.540832 | orchestrator | 2026-01-02 03:02:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:27.541440 | orchestrator | 2026-01-02 03:02:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:27.541498 | orchestrator | 2026-01-02 03:02:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:30.586679 | orchestrator | 2026-01-02 03:02:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:30.587742 | orchestrator | 2026-01-02 03:02:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:30.587961 | orchestrator | 2026-01-02 03:02:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:33.637447 | orchestrator | 2026-01-02 03:02:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:33.639815 | orchestrator | 2026-01-02 03:02:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:02:33.639894 | orchestrator | 2026-01-02 03:02:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:02:36.682468 | orchestrator | 2026-01-02 03:02:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:02:36.684381 | orchestrator | 2026-01-02 03:02:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:02:36.684474 | orchestrator | 2026-01-02 03:02:36 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:02:39.726560 | orchestrator | 2026-01-02 03:02:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:02:39.727113 | orchestrator | 2026-01-02 03:02:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:02:39.727146 | orchestrator | 2026-01-02 03:02:39 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:02:42 through 03:07:38; both tasks remained in state STARTED throughout ...]
2026-01-02 03:07:38.426341 | orchestrator | 2026-01-02 03:07:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:07:38.427036 | orchestrator | 2026-01-02 03:07:38 |
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:38.427147 | orchestrator | 2026-01-02 03:07:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:41.476613 | orchestrator | 2026-01-02 03:07:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:41.478587 | orchestrator | 2026-01-02 03:07:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:41.478772 | orchestrator | 2026-01-02 03:07:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:44.527173 | orchestrator | 2026-01-02 03:07:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:44.527678 | orchestrator | 2026-01-02 03:07:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:44.528188 | orchestrator | 2026-01-02 03:07:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:47.571920 | orchestrator | 2026-01-02 03:07:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:47.575573 | orchestrator | 2026-01-02 03:07:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:47.575653 | orchestrator | 2026-01-02 03:07:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:50.633293 | orchestrator | 2026-01-02 03:07:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:50.635279 | orchestrator | 2026-01-02 03:07:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:50.635320 | orchestrator | 2026-01-02 03:07:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:53.680731 | orchestrator | 2026-01-02 03:07:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:53.682323 | orchestrator | 2026-01-02 03:07:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:07:53.682469 | orchestrator | 2026-01-02 03:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:56.729015 | orchestrator | 2026-01-02 03:07:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:56.732533 | orchestrator | 2026-01-02 03:07:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:56.732710 | orchestrator | 2026-01-02 03:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:07:59.772096 | orchestrator | 2026-01-02 03:07:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:07:59.773426 | orchestrator | 2026-01-02 03:07:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:07:59.773704 | orchestrator | 2026-01-02 03:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:02.818127 | orchestrator | 2026-01-02 03:08:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:02.819472 | orchestrator | 2026-01-02 03:08:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:02.819519 | orchestrator | 2026-01-02 03:08:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:05.868783 | orchestrator | 2026-01-02 03:08:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:05.870362 | orchestrator | 2026-01-02 03:08:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:05.870423 | orchestrator | 2026-01-02 03:08:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:08.915324 | orchestrator | 2026-01-02 03:08:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:08.916833 | orchestrator | 2026-01-02 03:08:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:08.916994 | orchestrator | 2026-01-02 03:08:08 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:08:11.964000 | orchestrator | 2026-01-02 03:08:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:11.965074 | orchestrator | 2026-01-02 03:08:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:11.965109 | orchestrator | 2026-01-02 03:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:15.012686 | orchestrator | 2026-01-02 03:08:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:15.013530 | orchestrator | 2026-01-02 03:08:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:15.013622 | orchestrator | 2026-01-02 03:08:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:18.061254 | orchestrator | 2026-01-02 03:08:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:18.062794 | orchestrator | 2026-01-02 03:08:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:18.062834 | orchestrator | 2026-01-02 03:08:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:21.109478 | orchestrator | 2026-01-02 03:08:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:21.111160 | orchestrator | 2026-01-02 03:08:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:21.111237 | orchestrator | 2026-01-02 03:08:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:24.152132 | orchestrator | 2026-01-02 03:08:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:24.153741 | orchestrator | 2026-01-02 03:08:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:24.153873 | orchestrator | 2026-01-02 03:08:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:27.201165 | orchestrator | 
2026-01-02 03:08:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:27.202736 | orchestrator | 2026-01-02 03:08:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:27.203011 | orchestrator | 2026-01-02 03:08:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:30.253595 | orchestrator | 2026-01-02 03:08:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:30.254305 | orchestrator | 2026-01-02 03:08:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:30.254350 | orchestrator | 2026-01-02 03:08:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:33.299315 | orchestrator | 2026-01-02 03:08:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:33.299774 | orchestrator | 2026-01-02 03:08:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:33.300072 | orchestrator | 2026-01-02 03:08:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:36.348132 | orchestrator | 2026-01-02 03:08:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:36.349423 | orchestrator | 2026-01-02 03:08:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:36.349457 | orchestrator | 2026-01-02 03:08:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:39.390693 | orchestrator | 2026-01-02 03:08:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:39.392552 | orchestrator | 2026-01-02 03:08:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:39.392582 | orchestrator | 2026-01-02 03:08:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:42.438323 | orchestrator | 2026-01-02 03:08:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:08:42.439184 | orchestrator | 2026-01-02 03:08:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:42.439268 | orchestrator | 2026-01-02 03:08:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:45.484969 | orchestrator | 2026-01-02 03:08:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:45.486316 | orchestrator | 2026-01-02 03:08:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:45.486373 | orchestrator | 2026-01-02 03:08:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:48.528318 | orchestrator | 2026-01-02 03:08:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:48.529679 | orchestrator | 2026-01-02 03:08:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:48.529715 | orchestrator | 2026-01-02 03:08:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:51.570290 | orchestrator | 2026-01-02 03:08:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:51.572623 | orchestrator | 2026-01-02 03:08:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:51.572864 | orchestrator | 2026-01-02 03:08:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:54.613079 | orchestrator | 2026-01-02 03:08:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:54.614213 | orchestrator | 2026-01-02 03:08:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:54.614253 | orchestrator | 2026-01-02 03:08:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:08:57.664447 | orchestrator | 2026-01-02 03:08:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:08:57.666070 | orchestrator | 2026-01-02 03:08:57 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:08:57.666093 | orchestrator | 2026-01-02 03:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:00.713691 | orchestrator | 2026-01-02 03:09:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:00.714280 | orchestrator | 2026-01-02 03:09:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:00.714401 | orchestrator | 2026-01-02 03:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:03.763831 | orchestrator | 2026-01-02 03:09:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:03.765936 | orchestrator | 2026-01-02 03:09:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:03.765977 | orchestrator | 2026-01-02 03:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:06.816537 | orchestrator | 2026-01-02 03:09:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:06.818967 | orchestrator | 2026-01-02 03:09:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:06.819044 | orchestrator | 2026-01-02 03:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:09.866216 | orchestrator | 2026-01-02 03:09:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:09.867885 | orchestrator | 2026-01-02 03:09:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:09.867985 | orchestrator | 2026-01-02 03:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:12.918829 | orchestrator | 2026-01-02 03:09:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:12.919845 | orchestrator | 2026-01-02 03:09:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:09:12.919931 | orchestrator | 2026-01-02 03:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:15.965289 | orchestrator | 2026-01-02 03:09:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:15.967990 | orchestrator | 2026-01-02 03:09:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:15.968073 | orchestrator | 2026-01-02 03:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:19.013378 | orchestrator | 2026-01-02 03:09:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:19.014683 | orchestrator | 2026-01-02 03:09:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:19.014724 | orchestrator | 2026-01-02 03:09:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:22.061288 | orchestrator | 2026-01-02 03:09:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:22.062752 | orchestrator | 2026-01-02 03:09:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:22.062827 | orchestrator | 2026-01-02 03:09:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:25.115091 | orchestrator | 2026-01-02 03:09:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:25.117713 | orchestrator | 2026-01-02 03:09:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:25.117861 | orchestrator | 2026-01-02 03:09:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:28.159639 | orchestrator | 2026-01-02 03:09:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:28.161536 | orchestrator | 2026-01-02 03:09:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:28.161581 | orchestrator | 2026-01-02 03:09:28 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:09:31.213880 | orchestrator | 2026-01-02 03:09:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:31.215498 | orchestrator | 2026-01-02 03:09:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:31.216204 | orchestrator | 2026-01-02 03:09:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:34.264372 | orchestrator | 2026-01-02 03:09:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:34.265707 | orchestrator | 2026-01-02 03:09:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:34.265777 | orchestrator | 2026-01-02 03:09:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:37.311657 | orchestrator | 2026-01-02 03:09:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:37.311933 | orchestrator | 2026-01-02 03:09:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:37.312009 | orchestrator | 2026-01-02 03:09:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:40.368560 | orchestrator | 2026-01-02 03:09:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:40.370153 | orchestrator | 2026-01-02 03:09:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:40.370266 | orchestrator | 2026-01-02 03:09:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:43.413515 | orchestrator | 2026-01-02 03:09:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:43.414853 | orchestrator | 2026-01-02 03:09:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:43.415006 | orchestrator | 2026-01-02 03:09:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:46.463360 | orchestrator | 
2026-01-02 03:09:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:46.465564 | orchestrator | 2026-01-02 03:09:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:46.465592 | orchestrator | 2026-01-02 03:09:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:49.514767 | orchestrator | 2026-01-02 03:09:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:49.516564 | orchestrator | 2026-01-02 03:09:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:49.516705 | orchestrator | 2026-01-02 03:09:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:52.557724 | orchestrator | 2026-01-02 03:09:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:52.560247 | orchestrator | 2026-01-02 03:09:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:52.560406 | orchestrator | 2026-01-02 03:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:55.602442 | orchestrator | 2026-01-02 03:09:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:55.603522 | orchestrator | 2026-01-02 03:09:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:55.603560 | orchestrator | 2026-01-02 03:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:09:58.650941 | orchestrator | 2026-01-02 03:09:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:09:58.652256 | orchestrator | 2026-01-02 03:09:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:09:58.652348 | orchestrator | 2026-01-02 03:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:01.691525 | orchestrator | 2026-01-02 03:10:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:10:01.693343 | orchestrator | 2026-01-02 03:10:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:01.693420 | orchestrator | 2026-01-02 03:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:04.740777 | orchestrator | 2026-01-02 03:10:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:04.742362 | orchestrator | 2026-01-02 03:10:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:04.742477 | orchestrator | 2026-01-02 03:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:07.787212 | orchestrator | 2026-01-02 03:10:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:07.788961 | orchestrator | 2026-01-02 03:10:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:07.789111 | orchestrator | 2026-01-02 03:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:10.836136 | orchestrator | 2026-01-02 03:10:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:10.838195 | orchestrator | 2026-01-02 03:10:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:10.838258 | orchestrator | 2026-01-02 03:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:13.878990 | orchestrator | 2026-01-02 03:10:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:13.880461 | orchestrator | 2026-01-02 03:10:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:13.880490 | orchestrator | 2026-01-02 03:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:16.923696 | orchestrator | 2026-01-02 03:10:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:16.924974 | orchestrator | 2026-01-02 03:10:16 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:16.925007 | orchestrator | 2026-01-02 03:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:19.967054 | orchestrator | 2026-01-02 03:10:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:19.969184 | orchestrator | 2026-01-02 03:10:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:19.969245 | orchestrator | 2026-01-02 03:10:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:23.020990 | orchestrator | 2026-01-02 03:10:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:23.023958 | orchestrator | 2026-01-02 03:10:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:23.024288 | orchestrator | 2026-01-02 03:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:26.072310 | orchestrator | 2026-01-02 03:10:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:26.073780 | orchestrator | 2026-01-02 03:10:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:26.073825 | orchestrator | 2026-01-02 03:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:29.119195 | orchestrator | 2026-01-02 03:10:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:29.121108 | orchestrator | 2026-01-02 03:10:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:29.121257 | orchestrator | 2026-01-02 03:10:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:32.168590 | orchestrator | 2026-01-02 03:10:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:32.168854 | orchestrator | 2026-01-02 03:10:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:10:32.169031 | orchestrator | 2026-01-02 03:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:35.218473 | orchestrator | 2026-01-02 03:10:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:35.218676 | orchestrator | 2026-01-02 03:10:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:35.218700 | orchestrator | 2026-01-02 03:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:38.264738 | orchestrator | 2026-01-02 03:10:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:38.266687 | orchestrator | 2026-01-02 03:10:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:38.266855 | orchestrator | 2026-01-02 03:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:41.309948 | orchestrator | 2026-01-02 03:10:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:41.311322 | orchestrator | 2026-01-02 03:10:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:41.311380 | orchestrator | 2026-01-02 03:10:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:44.358392 | orchestrator | 2026-01-02 03:10:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:44.361454 | orchestrator | 2026-01-02 03:10:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:44.361611 | orchestrator | 2026-01-02 03:10:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:47.404610 | orchestrator | 2026-01-02 03:10:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:47.406520 | orchestrator | 2026-01-02 03:10:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:47.406597 | orchestrator | 2026-01-02 03:10:47 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:10:50.455635 | orchestrator | 2026-01-02 03:10:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:50.456783 | orchestrator | 2026-01-02 03:10:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:50.456813 | orchestrator | 2026-01-02 03:10:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:53.500415 | orchestrator | 2026-01-02 03:10:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:53.502679 | orchestrator | 2026-01-02 03:10:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:53.502738 | orchestrator | 2026-01-02 03:10:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:56.551620 | orchestrator | 2026-01-02 03:10:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:56.552121 | orchestrator | 2026-01-02 03:10:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:56.552163 | orchestrator | 2026-01-02 03:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:10:59.593301 | orchestrator | 2026-01-02 03:10:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:10:59.594006 | orchestrator | 2026-01-02 03:10:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:10:59.594114 | orchestrator | 2026-01-02 03:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:02.642564 | orchestrator | 2026-01-02 03:11:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:11:02.644755 | orchestrator | 2026-01-02 03:11:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:11:02.644960 | orchestrator | 2026-01-02 03:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:05.694379 | orchestrator | 
2026-01-02 03:11:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:11:05.694719 | orchestrator | 2026-01-02 03:11:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:11:05.694756 | orchestrator | 2026-01-02 03:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:08.743054 | orchestrator | 2026-01-02 03:11:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:11:08.745260 | orchestrator | 2026-01-02 03:11:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:11:08.745582 | orchestrator | 2026-01-02 03:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:11.787234 | orchestrator | 2026-01-02 03:11:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:11:11.789280 | orchestrator | 2026-01-02 03:11:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:11:11.789316 | orchestrator | 2026-01-02 03:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:14.835414 | orchestrator | 2026-01-02 03:11:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:11:14.837241 | orchestrator | 2026-01-02 03:11:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:11:14.837323 | orchestrator | 2026-01-02 03:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:17.883025 | orchestrator | 2026-01-02 03:11:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:11:17.885427 | orchestrator | 2026-01-02 03:11:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:11:17.885476 | orchestrator | 2026-01-02 03:11:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:11:20.932842 | orchestrator | 2026-01-02 03:11:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:11:20.934528 | orchestrator | 2026-01-02 03:11:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:11:20.935219 | orchestrator | 2026-01-02 03:11:20 | INFO  | Wait 1 second(s) until the next check
[... identical status polling of tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 repeated every ~3 seconds from 03:11:23 to 03:16:50, both tasks remaining in state STARTED ...]
2026-01-02 03:16:53.181217 | orchestrator | 2026-01-02 03:16:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:16:53.182566 | orchestrator | 2026-01-02 03:16:53 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:16:53.182611 | orchestrator | 2026-01-02 03:16:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:56.230988 | orchestrator | 2026-01-02 03:16:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:16:56.232217 | orchestrator | 2026-01-02 03:16:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:16:56.232291 | orchestrator | 2026-01-02 03:16:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:16:59.272361 | orchestrator | 2026-01-02 03:16:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:16:59.274446 | orchestrator | 2026-01-02 03:16:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:16:59.274490 | orchestrator | 2026-01-02 03:16:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:02.321048 | orchestrator | 2026-01-02 03:17:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:02.323783 | orchestrator | 2026-01-02 03:17:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:02.323889 | orchestrator | 2026-01-02 03:17:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:05.371482 | orchestrator | 2026-01-02 03:17:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:05.372533 | orchestrator | 2026-01-02 03:17:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:05.372776 | orchestrator | 2026-01-02 03:17:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:08.423766 | orchestrator | 2026-01-02 03:17:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:08.425434 | orchestrator | 2026-01-02 03:17:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:17:08.425494 | orchestrator | 2026-01-02 03:17:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:11.473182 | orchestrator | 2026-01-02 03:17:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:11.474960 | orchestrator | 2026-01-02 03:17:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:11.475008 | orchestrator | 2026-01-02 03:17:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:14.523461 | orchestrator | 2026-01-02 03:17:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:14.526604 | orchestrator | 2026-01-02 03:17:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:14.526698 | orchestrator | 2026-01-02 03:17:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:17.575120 | orchestrator | 2026-01-02 03:17:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:17.575959 | orchestrator | 2026-01-02 03:17:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:17.576108 | orchestrator | 2026-01-02 03:17:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:20.622249 | orchestrator | 2026-01-02 03:17:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:20.622814 | orchestrator | 2026-01-02 03:17:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:20.622921 | orchestrator | 2026-01-02 03:17:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:23.672786 | orchestrator | 2026-01-02 03:17:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:23.673750 | orchestrator | 2026-01-02 03:17:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:23.673906 | orchestrator | 2026-01-02 03:17:23 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:17:26.722172 | orchestrator | 2026-01-02 03:17:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:26.724765 | orchestrator | 2026-01-02 03:17:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:26.725012 | orchestrator | 2026-01-02 03:17:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:29.768158 | orchestrator | 2026-01-02 03:17:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:29.769389 | orchestrator | 2026-01-02 03:17:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:29.769433 | orchestrator | 2026-01-02 03:17:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:32.816901 | orchestrator | 2026-01-02 03:17:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:32.818222 | orchestrator | 2026-01-02 03:17:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:32.818342 | orchestrator | 2026-01-02 03:17:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:35.867153 | orchestrator | 2026-01-02 03:17:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:35.868508 | orchestrator | 2026-01-02 03:17:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:35.868715 | orchestrator | 2026-01-02 03:17:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:38.917305 | orchestrator | 2026-01-02 03:17:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:38.918729 | orchestrator | 2026-01-02 03:17:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:38.918988 | orchestrator | 2026-01-02 03:17:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:41.964599 | orchestrator | 
2026-01-02 03:17:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:41.966008 | orchestrator | 2026-01-02 03:17:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:41.966147 | orchestrator | 2026-01-02 03:17:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:45.015342 | orchestrator | 2026-01-02 03:17:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:45.017101 | orchestrator | 2026-01-02 03:17:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:45.017212 | orchestrator | 2026-01-02 03:17:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:48.062516 | orchestrator | 2026-01-02 03:17:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:48.064299 | orchestrator | 2026-01-02 03:17:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:48.064446 | orchestrator | 2026-01-02 03:17:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:51.112193 | orchestrator | 2026-01-02 03:17:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:51.113582 | orchestrator | 2026-01-02 03:17:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:51.113634 | orchestrator | 2026-01-02 03:17:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:54.152053 | orchestrator | 2026-01-02 03:17:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:17:54.153820 | orchestrator | 2026-01-02 03:17:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:54.154125 | orchestrator | 2026-01-02 03:17:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:17:57.200861 | orchestrator | 2026-01-02 03:17:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:17:57.202494 | orchestrator | 2026-01-02 03:17:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:17:57.202716 | orchestrator | 2026-01-02 03:17:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:00.249305 | orchestrator | 2026-01-02 03:18:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:00.250349 | orchestrator | 2026-01-02 03:18:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:00.250411 | orchestrator | 2026-01-02 03:18:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:03.296162 | orchestrator | 2026-01-02 03:18:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:03.298494 | orchestrator | 2026-01-02 03:18:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:03.298723 | orchestrator | 2026-01-02 03:18:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:06.350086 | orchestrator | 2026-01-02 03:18:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:06.353139 | orchestrator | 2026-01-02 03:18:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:06.353236 | orchestrator | 2026-01-02 03:18:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:09.398291 | orchestrator | 2026-01-02 03:18:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:09.399613 | orchestrator | 2026-01-02 03:18:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:09.399724 | orchestrator | 2026-01-02 03:18:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:12.441742 | orchestrator | 2026-01-02 03:18:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:12.444000 | orchestrator | 2026-01-02 03:18:12 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:12.444101 | orchestrator | 2026-01-02 03:18:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:15.486477 | orchestrator | 2026-01-02 03:18:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:15.487511 | orchestrator | 2026-01-02 03:18:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:15.487546 | orchestrator | 2026-01-02 03:18:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:18.536007 | orchestrator | 2026-01-02 03:18:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:18.536760 | orchestrator | 2026-01-02 03:18:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:18.536858 | orchestrator | 2026-01-02 03:18:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:21.580732 | orchestrator | 2026-01-02 03:18:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:21.582768 | orchestrator | 2026-01-02 03:18:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:21.582854 | orchestrator | 2026-01-02 03:18:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:24.628101 | orchestrator | 2026-01-02 03:18:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:24.629525 | orchestrator | 2026-01-02 03:18:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:24.629572 | orchestrator | 2026-01-02 03:18:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:27.673226 | orchestrator | 2026-01-02 03:18:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:27.674759 | orchestrator | 2026-01-02 03:18:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:18:27.674795 | orchestrator | 2026-01-02 03:18:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:30.719459 | orchestrator | 2026-01-02 03:18:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:30.721925 | orchestrator | 2026-01-02 03:18:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:30.722073 | orchestrator | 2026-01-02 03:18:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:33.766405 | orchestrator | 2026-01-02 03:18:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:33.767965 | orchestrator | 2026-01-02 03:18:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:33.768000 | orchestrator | 2026-01-02 03:18:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:36.816022 | orchestrator | 2026-01-02 03:18:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:36.816938 | orchestrator | 2026-01-02 03:18:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:36.816969 | orchestrator | 2026-01-02 03:18:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:39.860192 | orchestrator | 2026-01-02 03:18:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:39.862514 | orchestrator | 2026-01-02 03:18:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:39.862564 | orchestrator | 2026-01-02 03:18:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:42.912720 | orchestrator | 2026-01-02 03:18:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:42.915057 | orchestrator | 2026-01-02 03:18:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:42.915119 | orchestrator | 2026-01-02 03:18:42 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:18:45.954072 | orchestrator | 2026-01-02 03:18:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:45.955532 | orchestrator | 2026-01-02 03:18:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:45.955647 | orchestrator | 2026-01-02 03:18:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:49.002260 | orchestrator | 2026-01-02 03:18:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:49.003766 | orchestrator | 2026-01-02 03:18:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:49.003815 | orchestrator | 2026-01-02 03:18:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:52.051937 | orchestrator | 2026-01-02 03:18:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:52.053471 | orchestrator | 2026-01-02 03:18:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:52.053508 | orchestrator | 2026-01-02 03:18:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:55.096331 | orchestrator | 2026-01-02 03:18:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:55.096978 | orchestrator | 2026-01-02 03:18:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:55.097211 | orchestrator | 2026-01-02 03:18:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:18:58.140350 | orchestrator | 2026-01-02 03:18:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:18:58.141291 | orchestrator | 2026-01-02 03:18:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:18:58.141316 | orchestrator | 2026-01-02 03:18:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:01.180580 | orchestrator | 
2026-01-02 03:19:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:01.180924 | orchestrator | 2026-01-02 03:19:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:01.180952 | orchestrator | 2026-01-02 03:19:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:04.229020 | orchestrator | 2026-01-02 03:19:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:04.230435 | orchestrator | 2026-01-02 03:19:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:04.230566 | orchestrator | 2026-01-02 03:19:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:07.278891 | orchestrator | 2026-01-02 03:19:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:07.281121 | orchestrator | 2026-01-02 03:19:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:07.281171 | orchestrator | 2026-01-02 03:19:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:10.328610 | orchestrator | 2026-01-02 03:19:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:10.331211 | orchestrator | 2026-01-02 03:19:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:10.331368 | orchestrator | 2026-01-02 03:19:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:13.380455 | orchestrator | 2026-01-02 03:19:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:13.382415 | orchestrator | 2026-01-02 03:19:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:13.382453 | orchestrator | 2026-01-02 03:19:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:16.432084 | orchestrator | 2026-01-02 03:19:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:19:16.433127 | orchestrator | 2026-01-02 03:19:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:16.433169 | orchestrator | 2026-01-02 03:19:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:19.478440 | orchestrator | 2026-01-02 03:19:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:19.481822 | orchestrator | 2026-01-02 03:19:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:19.481990 | orchestrator | 2026-01-02 03:19:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:22.531262 | orchestrator | 2026-01-02 03:19:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:22.535435 | orchestrator | 2026-01-02 03:19:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:22.535516 | orchestrator | 2026-01-02 03:19:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:25.584240 | orchestrator | 2026-01-02 03:19:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:25.587616 | orchestrator | 2026-01-02 03:19:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:25.587699 | orchestrator | 2026-01-02 03:19:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:28.636626 | orchestrator | 2026-01-02 03:19:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:28.639334 | orchestrator | 2026-01-02 03:19:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:28.639407 | orchestrator | 2026-01-02 03:19:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:31.686790 | orchestrator | 2026-01-02 03:19:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:31.689449 | orchestrator | 2026-01-02 03:19:31 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:31.689504 | orchestrator | 2026-01-02 03:19:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:34.728188 | orchestrator | 2026-01-02 03:19:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:34.731375 | orchestrator | 2026-01-02 03:19:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:34.731430 | orchestrator | 2026-01-02 03:19:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:37.775791 | orchestrator | 2026-01-02 03:19:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:37.777395 | orchestrator | 2026-01-02 03:19:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:37.777475 | orchestrator | 2026-01-02 03:19:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:40.823620 | orchestrator | 2026-01-02 03:19:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:40.825116 | orchestrator | 2026-01-02 03:19:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:40.825352 | orchestrator | 2026-01-02 03:19:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:43.872960 | orchestrator | 2026-01-02 03:19:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:43.873731 | orchestrator | 2026-01-02 03:19:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:43.874802 | orchestrator | 2026-01-02 03:19:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:46.920095 | orchestrator | 2026-01-02 03:19:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:46.921413 | orchestrator | 2026-01-02 03:19:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:19:46.921431 | orchestrator | 2026-01-02 03:19:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:49.966899 | orchestrator | 2026-01-02 03:19:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:49.967467 | orchestrator | 2026-01-02 03:19:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:49.967823 | orchestrator | 2026-01-02 03:19:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:53.025177 | orchestrator | 2026-01-02 03:19:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:53.026256 | orchestrator | 2026-01-02 03:19:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:53.026573 | orchestrator | 2026-01-02 03:19:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:56.069159 | orchestrator | 2026-01-02 03:19:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:56.071785 | orchestrator | 2026-01-02 03:19:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:56.071978 | orchestrator | 2026-01-02 03:19:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:19:59.112590 | orchestrator | 2026-01-02 03:19:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:19:59.114376 | orchestrator | 2026-01-02 03:19:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:19:59.114492 | orchestrator | 2026-01-02 03:19:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:02.172580 | orchestrator | 2026-01-02 03:20:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:02.174071 | orchestrator | 2026-01-02 03:20:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:02.174110 | orchestrator | 2026-01-02 03:20:02 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:20:05.219431 | orchestrator | 2026-01-02 03:20:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:05.221338 | orchestrator | 2026-01-02 03:20:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:05.221775 | orchestrator | 2026-01-02 03:20:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:08.270662 | orchestrator | 2026-01-02 03:20:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:08.271458 | orchestrator | 2026-01-02 03:20:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:08.271555 | orchestrator | 2026-01-02 03:20:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:11.313272 | orchestrator | 2026-01-02 03:20:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:11.314729 | orchestrator | 2026-01-02 03:20:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:11.315004 | orchestrator | 2026-01-02 03:20:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:14.359490 | orchestrator | 2026-01-02 03:20:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:14.361694 | orchestrator | 2026-01-02 03:20:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:14.361742 | orchestrator | 2026-01-02 03:20:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:17.403550 | orchestrator | 2026-01-02 03:20:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:17.405161 | orchestrator | 2026-01-02 03:20:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:17.405223 | orchestrator | 2026-01-02 03:20:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:20.450114 | orchestrator | 
2026-01-02 03:20:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:20.452573 | orchestrator | 2026-01-02 03:20:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:20.452917 | orchestrator | 2026-01-02 03:20:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:23.501204 | orchestrator | 2026-01-02 03:20:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:23.503206 | orchestrator | 2026-01-02 03:20:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:23.503311 | orchestrator | 2026-01-02 03:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:26.547874 | orchestrator | 2026-01-02 03:20:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:26.549070 | orchestrator | 2026-01-02 03:20:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:26.549098 | orchestrator | 2026-01-02 03:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:29.592178 | orchestrator | 2026-01-02 03:20:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:29.595508 | orchestrator | 2026-01-02 03:20:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:29.595650 | orchestrator | 2026-01-02 03:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:32.633199 | orchestrator | 2026-01-02 03:20:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:20:32.634467 | orchestrator | 2026-01-02 03:20:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:20:32.634501 | orchestrator | 2026-01-02 03:20:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:20:35.684396 | orchestrator | 2026-01-02 03:20:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED
[... polling loop repeated every ~3 s from 2026-01-02 03:20:35 to 03:25:52: tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 remain in state STARTED, each check followed by "INFO  | Wait 1 second(s) until the next check" ...]
2026-01-02 03:25:52.837150 | orchestrator | 2026-01-02 03:25:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in
state STARTED 2026-01-02 03:25:52.838913 | orchestrator | 2026-01-02 03:25:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:25:52.838988 | orchestrator | 2026-01-02 03:25:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:55.888212 | orchestrator | 2026-01-02 03:25:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:25:55.889871 | orchestrator | 2026-01-02 03:25:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:25:55.889949 | orchestrator | 2026-01-02 03:25:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:25:58.936163 | orchestrator | 2026-01-02 03:25:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:25:58.937714 | orchestrator | 2026-01-02 03:25:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:25:58.937778 | orchestrator | 2026-01-02 03:25:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:01.986315 | orchestrator | 2026-01-02 03:26:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:01.987919 | orchestrator | 2026-01-02 03:26:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:01.988025 | orchestrator | 2026-01-02 03:26:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:05.037251 | orchestrator | 2026-01-02 03:26:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:05.039448 | orchestrator | 2026-01-02 03:26:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:05.039472 | orchestrator | 2026-01-02 03:26:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:08.087247 | orchestrator | 2026-01-02 03:26:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:08.088358 | orchestrator | 2026-01-02 03:26:08 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:08.088569 | orchestrator | 2026-01-02 03:26:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:11.135458 | orchestrator | 2026-01-02 03:26:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:11.136536 | orchestrator | 2026-01-02 03:26:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:11.136722 | orchestrator | 2026-01-02 03:26:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:14.181792 | orchestrator | 2026-01-02 03:26:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:14.183155 | orchestrator | 2026-01-02 03:26:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:14.183321 | orchestrator | 2026-01-02 03:26:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:17.229090 | orchestrator | 2026-01-02 03:26:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:17.230294 | orchestrator | 2026-01-02 03:26:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:17.230321 | orchestrator | 2026-01-02 03:26:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:20.267763 | orchestrator | 2026-01-02 03:26:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:20.268470 | orchestrator | 2026-01-02 03:26:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:20.268486 | orchestrator | 2026-01-02 03:26:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:23.317936 | orchestrator | 2026-01-02 03:26:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:23.319447 | orchestrator | 2026-01-02 03:26:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:26:23.319463 | orchestrator | 2026-01-02 03:26:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:26.367601 | orchestrator | 2026-01-02 03:26:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:26.369143 | orchestrator | 2026-01-02 03:26:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:26.369168 | orchestrator | 2026-01-02 03:26:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:29.414512 | orchestrator | 2026-01-02 03:26:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:29.415329 | orchestrator | 2026-01-02 03:26:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:29.415405 | orchestrator | 2026-01-02 03:26:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:32.458702 | orchestrator | 2026-01-02 03:26:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:32.460180 | orchestrator | 2026-01-02 03:26:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:32.460221 | orchestrator | 2026-01-02 03:26:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:35.501552 | orchestrator | 2026-01-02 03:26:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:35.503056 | orchestrator | 2026-01-02 03:26:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:35.503393 | orchestrator | 2026-01-02 03:26:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:38.549444 | orchestrator | 2026-01-02 03:26:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:38.550516 | orchestrator | 2026-01-02 03:26:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:38.550606 | orchestrator | 2026-01-02 03:26:38 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:26:41.597956 | orchestrator | 2026-01-02 03:26:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:41.599901 | orchestrator | 2026-01-02 03:26:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:41.600226 | orchestrator | 2026-01-02 03:26:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:44.646791 | orchestrator | 2026-01-02 03:26:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:44.648391 | orchestrator | 2026-01-02 03:26:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:44.648477 | orchestrator | 2026-01-02 03:26:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:47.694570 | orchestrator | 2026-01-02 03:26:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:47.695801 | orchestrator | 2026-01-02 03:26:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:47.695943 | orchestrator | 2026-01-02 03:26:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:50.741674 | orchestrator | 2026-01-02 03:26:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:50.742867 | orchestrator | 2026-01-02 03:26:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:50.742923 | orchestrator | 2026-01-02 03:26:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:53.787138 | orchestrator | 2026-01-02 03:26:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:53.788484 | orchestrator | 2026-01-02 03:26:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:53.788694 | orchestrator | 2026-01-02 03:26:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:56.829741 | orchestrator | 
2026-01-02 03:26:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:56.832108 | orchestrator | 2026-01-02 03:26:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:56.832207 | orchestrator | 2026-01-02 03:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:26:59.873095 | orchestrator | 2026-01-02 03:26:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:26:59.874962 | orchestrator | 2026-01-02 03:26:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:26:59.875168 | orchestrator | 2026-01-02 03:26:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:02.919966 | orchestrator | 2026-01-02 03:27:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:02.920528 | orchestrator | 2026-01-02 03:27:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:02.920606 | orchestrator | 2026-01-02 03:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:05.968732 | orchestrator | 2026-01-02 03:27:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:05.970172 | orchestrator | 2026-01-02 03:27:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:05.970206 | orchestrator | 2026-01-02 03:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:09.013966 | orchestrator | 2026-01-02 03:27:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:09.015537 | orchestrator | 2026-01-02 03:27:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:09.015598 | orchestrator | 2026-01-02 03:27:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:12.069686 | orchestrator | 2026-01-02 03:27:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:27:12.069906 | orchestrator | 2026-01-02 03:27:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:12.069930 | orchestrator | 2026-01-02 03:27:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:15.118640 | orchestrator | 2026-01-02 03:27:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:15.120496 | orchestrator | 2026-01-02 03:27:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:15.120980 | orchestrator | 2026-01-02 03:27:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:18.170475 | orchestrator | 2026-01-02 03:27:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:18.172308 | orchestrator | 2026-01-02 03:27:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:18.172589 | orchestrator | 2026-01-02 03:27:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:21.215694 | orchestrator | 2026-01-02 03:27:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:21.216392 | orchestrator | 2026-01-02 03:27:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:21.216736 | orchestrator | 2026-01-02 03:27:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:24.256712 | orchestrator | 2026-01-02 03:27:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:24.258274 | orchestrator | 2026-01-02 03:27:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:24.258453 | orchestrator | 2026-01-02 03:27:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:27.305560 | orchestrator | 2026-01-02 03:27:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:27.307518 | orchestrator | 2026-01-02 03:27:27 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:27.307560 | orchestrator | 2026-01-02 03:27:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:30.350102 | orchestrator | 2026-01-02 03:27:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:30.352562 | orchestrator | 2026-01-02 03:27:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:30.352944 | orchestrator | 2026-01-02 03:27:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:33.401742 | orchestrator | 2026-01-02 03:27:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:33.403105 | orchestrator | 2026-01-02 03:27:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:33.403127 | orchestrator | 2026-01-02 03:27:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:36.451911 | orchestrator | 2026-01-02 03:27:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:36.454239 | orchestrator | 2026-01-02 03:27:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:36.454377 | orchestrator | 2026-01-02 03:27:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:39.503699 | orchestrator | 2026-01-02 03:27:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:39.505764 | orchestrator | 2026-01-02 03:27:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:39.505870 | orchestrator | 2026-01-02 03:27:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:42.553603 | orchestrator | 2026-01-02 03:27:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:42.555378 | orchestrator | 2026-01-02 03:27:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:27:42.555568 | orchestrator | 2026-01-02 03:27:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:45.603947 | orchestrator | 2026-01-02 03:27:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:45.604732 | orchestrator | 2026-01-02 03:27:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:45.604783 | orchestrator | 2026-01-02 03:27:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:48.657216 | orchestrator | 2026-01-02 03:27:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:48.658748 | orchestrator | 2026-01-02 03:27:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:48.658816 | orchestrator | 2026-01-02 03:27:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:51.703223 | orchestrator | 2026-01-02 03:27:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:51.704332 | orchestrator | 2026-01-02 03:27:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:51.704479 | orchestrator | 2026-01-02 03:27:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:54.750297 | orchestrator | 2026-01-02 03:27:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:54.752121 | orchestrator | 2026-01-02 03:27:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:54.752162 | orchestrator | 2026-01-02 03:27:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:27:57.800773 | orchestrator | 2026-01-02 03:27:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:27:57.803307 | orchestrator | 2026-01-02 03:27:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:27:57.803331 | orchestrator | 2026-01-02 03:27:57 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:28:00.851446 | orchestrator | 2026-01-02 03:28:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:00.852327 | orchestrator | 2026-01-02 03:28:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:00.852362 | orchestrator | 2026-01-02 03:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:03.892243 | orchestrator | 2026-01-02 03:28:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:03.892358 | orchestrator | 2026-01-02 03:28:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:03.892375 | orchestrator | 2026-01-02 03:28:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:06.941772 | orchestrator | 2026-01-02 03:28:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:06.943137 | orchestrator | 2026-01-02 03:28:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:06.943196 | orchestrator | 2026-01-02 03:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:09.983840 | orchestrator | 2026-01-02 03:28:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:09.985237 | orchestrator | 2026-01-02 03:28:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:09.985324 | orchestrator | 2026-01-02 03:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:13.042238 | orchestrator | 2026-01-02 03:28:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:13.044055 | orchestrator | 2026-01-02 03:28:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:13.044110 | orchestrator | 2026-01-02 03:28:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:16.088553 | orchestrator | 
2026-01-02 03:28:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:16.090220 | orchestrator | 2026-01-02 03:28:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:16.090353 | orchestrator | 2026-01-02 03:28:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:19.133532 | orchestrator | 2026-01-02 03:28:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:19.134664 | orchestrator | 2026-01-02 03:28:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:19.134905 | orchestrator | 2026-01-02 03:28:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:22.181846 | orchestrator | 2026-01-02 03:28:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:22.183487 | orchestrator | 2026-01-02 03:28:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:22.183542 | orchestrator | 2026-01-02 03:28:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:25.227899 | orchestrator | 2026-01-02 03:28:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:25.228587 | orchestrator | 2026-01-02 03:28:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:25.228792 | orchestrator | 2026-01-02 03:28:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:28.278274 | orchestrator | 2026-01-02 03:28:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:28.279679 | orchestrator | 2026-01-02 03:28:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:28.279830 | orchestrator | 2026-01-02 03:28:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:31.326491 | orchestrator | 2026-01-02 03:28:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:28:31.328335 | orchestrator | 2026-01-02 03:28:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:31.328462 | orchestrator | 2026-01-02 03:28:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:34.371852 | orchestrator | 2026-01-02 03:28:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:34.373537 | orchestrator | 2026-01-02 03:28:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:34.373575 | orchestrator | 2026-01-02 03:28:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:37.420771 | orchestrator | 2026-01-02 03:28:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:37.422111 | orchestrator | 2026-01-02 03:28:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:37.422142 | orchestrator | 2026-01-02 03:28:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:40.469096 | orchestrator | 2026-01-02 03:28:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:40.470868 | orchestrator | 2026-01-02 03:28:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:40.470911 | orchestrator | 2026-01-02 03:28:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:43.517680 | orchestrator | 2026-01-02 03:28:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:43.519957 | orchestrator | 2026-01-02 03:28:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:43.520016 | orchestrator | 2026-01-02 03:28:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:46.560799 | orchestrator | 2026-01-02 03:28:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:46.563498 | orchestrator | 2026-01-02 03:28:46 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:46.563557 | orchestrator | 2026-01-02 03:28:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:49.605271 | orchestrator | 2026-01-02 03:28:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:49.606624 | orchestrator | 2026-01-02 03:28:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:49.606720 | orchestrator | 2026-01-02 03:28:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:52.649652 | orchestrator | 2026-01-02 03:28:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:52.652252 | orchestrator | 2026-01-02 03:28:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:52.652292 | orchestrator | 2026-01-02 03:28:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:55.695566 | orchestrator | 2026-01-02 03:28:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:55.698001 | orchestrator | 2026-01-02 03:28:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:55.698146 | orchestrator | 2026-01-02 03:28:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:28:58.750532 | orchestrator | 2026-01-02 03:28:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:28:58.752832 | orchestrator | 2026-01-02 03:28:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:28:58.753009 | orchestrator | 2026-01-02 03:28:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:29:01.799907 | orchestrator | 2026-01-02 03:29:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:29:01.802950 | orchestrator | 2026-01-02 03:29:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:29:01.803035 | orchestrator | 2026-01-02 03:29:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:29:04.847670 | orchestrator | 2026-01-02 03:29:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:29:04.849130 | orchestrator | 2026-01-02 03:29:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:29:04.849278 | orchestrator | 2026-01-02 03:29:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:29:07.900198 | orchestrator | 2026-01-02 03:29:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:29:07.901694 | orchestrator | 2026-01-02 03:29:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:29:07.901780 | orchestrator | 2026-01-02 03:29:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:29:10.951688 | orchestrator | 2026-01-02 03:29:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:29:10.952660 | orchestrator | 2026-01-02 03:29:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:29:10.952703 | orchestrator | 2026-01-02 03:29:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:29:14.006820 | orchestrator | 2026-01-02 03:29:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:29:14.009053 | orchestrator | 2026-01-02 03:29:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:29:14.009126 | orchestrator | 2026-01-02 03:29:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:29:17.058280 | orchestrator | 2026-01-02 03:29:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:29:17.060747 | orchestrator | 2026-01-02 03:29:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:29:17.060850 | orchestrator | 2026-01-02 03:29:17 | INFO  | Wait 
1 second(s) until the next check
2026-01-02 03:29:20.111376 | orchestrator | 2026-01-02 03:29:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:29:20.113952 | orchestrator | 2026-01-02 03:29:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:29:20.114163 | orchestrator | 2026-01-02 03:29:20 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:29:23 through 03:34:31; both tasks remained in state STARTED throughout ...]
2026-01-02 03:34:34.069481 | orchestrator | 2026-01-02 03:34:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:34:34.070991 | orchestrator | 2026-01-02 03:34:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:34:34.071199 | orchestrator | 2026-01-02 03:34:34 | INFO  | Wait
1 second(s) until the next check 2026-01-02 03:34:37.108961 | orchestrator | 2026-01-02 03:34:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:37.109208 | orchestrator | 2026-01-02 03:34:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:37.109234 | orchestrator | 2026-01-02 03:34:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:40.152734 | orchestrator | 2026-01-02 03:34:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:40.154344 | orchestrator | 2026-01-02 03:34:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:40.154457 | orchestrator | 2026-01-02 03:34:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:43.197325 | orchestrator | 2026-01-02 03:34:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:43.199263 | orchestrator | 2026-01-02 03:34:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:43.199509 | orchestrator | 2026-01-02 03:34:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:46.247718 | orchestrator | 2026-01-02 03:34:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:46.249208 | orchestrator | 2026-01-02 03:34:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:46.249322 | orchestrator | 2026-01-02 03:34:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:49.295667 | orchestrator | 2026-01-02 03:34:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:49.297578 | orchestrator | 2026-01-02 03:34:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:49.297639 | orchestrator | 2026-01-02 03:34:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:52.346003 | orchestrator | 
2026-01-02 03:34:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:52.347725 | orchestrator | 2026-01-02 03:34:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:52.347766 | orchestrator | 2026-01-02 03:34:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:55.399876 | orchestrator | 2026-01-02 03:34:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:55.401109 | orchestrator | 2026-01-02 03:34:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:55.401227 | orchestrator | 2026-01-02 03:34:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:34:58.446513 | orchestrator | 2026-01-02 03:34:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:34:58.447705 | orchestrator | 2026-01-02 03:34:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:34:58.447838 | orchestrator | 2026-01-02 03:34:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:01.491518 | orchestrator | 2026-01-02 03:35:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:01.494372 | orchestrator | 2026-01-02 03:35:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:01.494795 | orchestrator | 2026-01-02 03:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:04.537463 | orchestrator | 2026-01-02 03:35:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:04.539246 | orchestrator | 2026-01-02 03:35:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:04.539304 | orchestrator | 2026-01-02 03:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:07.582185 | orchestrator | 2026-01-02 03:35:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:35:07.584207 | orchestrator | 2026-01-02 03:35:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:07.584293 | orchestrator | 2026-01-02 03:35:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:10.628181 | orchestrator | 2026-01-02 03:35:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:10.629412 | orchestrator | 2026-01-02 03:35:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:10.629508 | orchestrator | 2026-01-02 03:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:13.674379 | orchestrator | 2026-01-02 03:35:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:13.675759 | orchestrator | 2026-01-02 03:35:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:13.675879 | orchestrator | 2026-01-02 03:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:16.717848 | orchestrator | 2026-01-02 03:35:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:16.719379 | orchestrator | 2026-01-02 03:35:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:16.719470 | orchestrator | 2026-01-02 03:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:19.762757 | orchestrator | 2026-01-02 03:35:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:19.764199 | orchestrator | 2026-01-02 03:35:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:19.764246 | orchestrator | 2026-01-02 03:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:22.813324 | orchestrator | 2026-01-02 03:35:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:22.815677 | orchestrator | 2026-01-02 03:35:22 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:22.815728 | orchestrator | 2026-01-02 03:35:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:25.867065 | orchestrator | 2026-01-02 03:35:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:25.869846 | orchestrator | 2026-01-02 03:35:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:25.869901 | orchestrator | 2026-01-02 03:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:28.915604 | orchestrator | 2026-01-02 03:35:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:28.916672 | orchestrator | 2026-01-02 03:35:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:28.916694 | orchestrator | 2026-01-02 03:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:31.973695 | orchestrator | 2026-01-02 03:35:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:31.974605 | orchestrator | 2026-01-02 03:35:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:31.974708 | orchestrator | 2026-01-02 03:35:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:35.021779 | orchestrator | 2026-01-02 03:35:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:35.024146 | orchestrator | 2026-01-02 03:35:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:35.024181 | orchestrator | 2026-01-02 03:35:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:38.071340 | orchestrator | 2026-01-02 03:35:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:38.072930 | orchestrator | 2026-01-02 03:35:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:35:38.072985 | orchestrator | 2026-01-02 03:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:41.125478 | orchestrator | 2026-01-02 03:35:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:41.127356 | orchestrator | 2026-01-02 03:35:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:41.127402 | orchestrator | 2026-01-02 03:35:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:44.172285 | orchestrator | 2026-01-02 03:35:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:44.173846 | orchestrator | 2026-01-02 03:35:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:44.173904 | orchestrator | 2026-01-02 03:35:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:47.215718 | orchestrator | 2026-01-02 03:35:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:47.218076 | orchestrator | 2026-01-02 03:35:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:47.218105 | orchestrator | 2026-01-02 03:35:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:50.262368 | orchestrator | 2026-01-02 03:35:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:50.263733 | orchestrator | 2026-01-02 03:35:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:50.264206 | orchestrator | 2026-01-02 03:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:53.306265 | orchestrator | 2026-01-02 03:35:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:53.307710 | orchestrator | 2026-01-02 03:35:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:53.307746 | orchestrator | 2026-01-02 03:35:53 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:35:56.353538 | orchestrator | 2026-01-02 03:35:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:56.354102 | orchestrator | 2026-01-02 03:35:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:56.354131 | orchestrator | 2026-01-02 03:35:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:35:59.399840 | orchestrator | 2026-01-02 03:35:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:35:59.400534 | orchestrator | 2026-01-02 03:35:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:35:59.400569 | orchestrator | 2026-01-02 03:35:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:02.451081 | orchestrator | 2026-01-02 03:36:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:02.452767 | orchestrator | 2026-01-02 03:36:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:02.452814 | orchestrator | 2026-01-02 03:36:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:05.511361 | orchestrator | 2026-01-02 03:36:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:05.512467 | orchestrator | 2026-01-02 03:36:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:05.512499 | orchestrator | 2026-01-02 03:36:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:08.554879 | orchestrator | 2026-01-02 03:36:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:08.556351 | orchestrator | 2026-01-02 03:36:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:08.556399 | orchestrator | 2026-01-02 03:36:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:11.602533 | orchestrator | 
2026-01-02 03:36:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:11.605475 | orchestrator | 2026-01-02 03:36:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:11.605531 | orchestrator | 2026-01-02 03:36:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:14.650904 | orchestrator | 2026-01-02 03:36:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:14.652263 | orchestrator | 2026-01-02 03:36:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:14.652301 | orchestrator | 2026-01-02 03:36:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:17.699125 | orchestrator | 2026-01-02 03:36:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:17.700897 | orchestrator | 2026-01-02 03:36:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:17.700945 | orchestrator | 2026-01-02 03:36:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:20.743299 | orchestrator | 2026-01-02 03:36:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:20.745010 | orchestrator | 2026-01-02 03:36:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:20.745058 | orchestrator | 2026-01-02 03:36:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:23.791550 | orchestrator | 2026-01-02 03:36:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:23.793059 | orchestrator | 2026-01-02 03:36:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:23.793175 | orchestrator | 2026-01-02 03:36:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:26.839026 | orchestrator | 2026-01-02 03:36:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:36:26.840996 | orchestrator | 2026-01-02 03:36:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:26.841048 | orchestrator | 2026-01-02 03:36:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:29.887811 | orchestrator | 2026-01-02 03:36:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:29.890244 | orchestrator | 2026-01-02 03:36:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:29.890332 | orchestrator | 2026-01-02 03:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:32.942929 | orchestrator | 2026-01-02 03:36:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:32.944597 | orchestrator | 2026-01-02 03:36:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:32.944692 | orchestrator | 2026-01-02 03:36:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:35.987172 | orchestrator | 2026-01-02 03:36:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:35.989033 | orchestrator | 2026-01-02 03:36:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:35.989102 | orchestrator | 2026-01-02 03:36:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:39.033218 | orchestrator | 2026-01-02 03:36:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:39.034947 | orchestrator | 2026-01-02 03:36:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:39.035073 | orchestrator | 2026-01-02 03:36:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:42.085035 | orchestrator | 2026-01-02 03:36:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:42.091700 | orchestrator | 2026-01-02 03:36:42 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:42.091871 | orchestrator | 2026-01-02 03:36:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:45.135445 | orchestrator | 2026-01-02 03:36:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:45.137358 | orchestrator | 2026-01-02 03:36:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:45.137419 | orchestrator | 2026-01-02 03:36:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:48.183040 | orchestrator | 2026-01-02 03:36:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:48.184380 | orchestrator | 2026-01-02 03:36:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:48.184608 | orchestrator | 2026-01-02 03:36:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:51.233177 | orchestrator | 2026-01-02 03:36:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:51.235787 | orchestrator | 2026-01-02 03:36:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:51.236051 | orchestrator | 2026-01-02 03:36:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:54.283149 | orchestrator | 2026-01-02 03:36:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:54.285737 | orchestrator | 2026-01-02 03:36:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:36:54.285772 | orchestrator | 2026-01-02 03:36:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:36:57.333553 | orchestrator | 2026-01-02 03:36:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:36:57.335449 | orchestrator | 2026-01-02 03:36:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:36:57.335547 | orchestrator | 2026-01-02 03:36:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:00.374214 | orchestrator | 2026-01-02 03:37:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:00.376400 | orchestrator | 2026-01-02 03:37:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:00.376453 | orchestrator | 2026-01-02 03:37:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:03.422177 | orchestrator | 2026-01-02 03:37:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:03.423310 | orchestrator | 2026-01-02 03:37:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:03.423333 | orchestrator | 2026-01-02 03:37:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:06.471577 | orchestrator | 2026-01-02 03:37:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:06.473286 | orchestrator | 2026-01-02 03:37:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:06.473336 | orchestrator | 2026-01-02 03:37:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:09.516570 | orchestrator | 2026-01-02 03:37:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:09.519867 | orchestrator | 2026-01-02 03:37:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:09.519899 | orchestrator | 2026-01-02 03:37:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:12.563436 | orchestrator | 2026-01-02 03:37:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:12.564988 | orchestrator | 2026-01-02 03:37:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:12.565068 | orchestrator | 2026-01-02 03:37:12 | INFO  | Wait 
1 second(s) until the next check 2026-01-02 03:37:15.612752 | orchestrator | 2026-01-02 03:37:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:15.615282 | orchestrator | 2026-01-02 03:37:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:15.615318 | orchestrator | 2026-01-02 03:37:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:18.663284 | orchestrator | 2026-01-02 03:37:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:18.664212 | orchestrator | 2026-01-02 03:37:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:18.664403 | orchestrator | 2026-01-02 03:37:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:21.712756 | orchestrator | 2026-01-02 03:37:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:21.714182 | orchestrator | 2026-01-02 03:37:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:21.714263 | orchestrator | 2026-01-02 03:37:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:24.760314 | orchestrator | 2026-01-02 03:37:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:24.761882 | orchestrator | 2026-01-02 03:37:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:24.762076 | orchestrator | 2026-01-02 03:37:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:27.807360 | orchestrator | 2026-01-02 03:37:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:27.810170 | orchestrator | 2026-01-02 03:37:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:27.810258 | orchestrator | 2026-01-02 03:37:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:30.855041 | orchestrator | 
2026-01-02 03:37:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:30.856553 | orchestrator | 2026-01-02 03:37:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:30.856762 | orchestrator | 2026-01-02 03:37:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:33.908842 | orchestrator | 2026-01-02 03:37:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:33.910856 | orchestrator | 2026-01-02 03:37:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:33.910902 | orchestrator | 2026-01-02 03:37:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:36.953859 | orchestrator | 2026-01-02 03:37:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:36.955231 | orchestrator | 2026-01-02 03:37:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:36.955681 | orchestrator | 2026-01-02 03:37:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:40.006474 | orchestrator | 2026-01-02 03:37:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:40.008469 | orchestrator | 2026-01-02 03:37:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:40.008559 | orchestrator | 2026-01-02 03:37:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:43.058429 | orchestrator | 2026-01-02 03:37:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:43.059504 | orchestrator | 2026-01-02 03:37:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:43.059570 | orchestrator | 2026-01-02 03:37:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:46.105133 | orchestrator | 2026-01-02 03:37:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in 
state STARTED 2026-01-02 03:37:46.107555 | orchestrator | 2026-01-02 03:37:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:46.107904 | orchestrator | 2026-01-02 03:37:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:49.155554 | orchestrator | 2026-01-02 03:37:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:49.156744 | orchestrator | 2026-01-02 03:37:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:49.156797 | orchestrator | 2026-01-02 03:37:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:52.206390 | orchestrator | 2026-01-02 03:37:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:52.208495 | orchestrator | 2026-01-02 03:37:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:52.208641 | orchestrator | 2026-01-02 03:37:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:55.248910 | orchestrator | 2026-01-02 03:37:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:55.249765 | orchestrator | 2026-01-02 03:37:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:55.249803 | orchestrator | 2026-01-02 03:37:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:37:58.295842 | orchestrator | 2026-01-02 03:37:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:37:58.297428 | orchestrator | 2026-01-02 03:37:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:37:58.297457 | orchestrator | 2026-01-02 03:37:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:38:01.351314 | orchestrator | 2026-01-02 03:38:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:38:01.352995 | orchestrator | 2026-01-02 03:38:01 | 
INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:38:01.353044 | orchestrator | 2026-01-02 03:38:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:38:04.394801 | orchestrator | 2026-01-02 03:38:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:38:04.396712 | orchestrator | 2026-01-02 03:38:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:38:04.396790 | orchestrator | 2026-01-02 03:38:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:38:07.445237 | orchestrator | 2026-01-02 03:38:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:38:07.447002 | orchestrator | 2026-01-02 03:38:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:38:07.447342 | orchestrator | 2026-01-02 03:38:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:38:10.494299 | orchestrator | 2026-01-02 03:38:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:38:10.496782 | orchestrator | 2026-01-02 03:38:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:38:10.496839 | orchestrator | 2026-01-02 03:38:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:38:13.544725 | orchestrator | 2026-01-02 03:38:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:38:13.544955 | orchestrator | 2026-01-02 03:38:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:38:13.545239 | orchestrator | 2026-01-02 03:38:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:38:16.588110 | orchestrator | 2026-01-02 03:38:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:38:16.589495 | orchestrator | 2026-01-02 03:38:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 
2026-01-02 03:38:16.589540 | orchestrator | 2026-01-02 03:38:16 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:38:19.637792 | orchestrator | 2026-01-02 03:38:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:38:19.639060 | orchestrator | 2026-01-02 03:38:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:38:19.639126 | orchestrator | 2026-01-02 03:38:19 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds: both tasks remained in state STARTED from 03:38:19 through 03:43:33 ...]
2026-01-02 03:43:30.681303 | orchestrator | 2026-01-02 03:43:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:43:30.682607 | orchestrator | 2026-01-02 03:43:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:43:30.682663 | orchestrator | 2026-01-02 03:43:30 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:43:33.730290 | orchestrator | 2026-01-02 03:43:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:43:33.733837 | orchestrator | 2026-01-02 03:43:33 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:33.733901 | orchestrator | 2026-01-02 03:43:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:36.775641 | orchestrator | 2026-01-02 03:43:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:36.776687 | orchestrator | 2026-01-02 03:43:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:36.776765 | orchestrator | 2026-01-02 03:43:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:39.822872 | orchestrator | 2026-01-02 03:43:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:39.824300 | orchestrator | 2026-01-02 03:43:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:39.824431 | orchestrator | 2026-01-02 03:43:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:42.870826 | orchestrator | 2026-01-02 03:43:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:42.872137 | orchestrator | 2026-01-02 03:43:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:42.872175 | orchestrator | 2026-01-02 03:43:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:45.923343 | orchestrator | 2026-01-02 03:43:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:45.924926 | orchestrator | 2026-01-02 03:43:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:45.924982 | orchestrator | 2026-01-02 03:43:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:48.971636 | orchestrator | 2026-01-02 03:43:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:48.974264 | orchestrator | 2026-01-02 03:43:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
03:43:48.974410 | orchestrator | 2026-01-02 03:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:52.022561 | orchestrator | 2026-01-02 03:43:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:52.024838 | orchestrator | 2026-01-02 03:43:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:52.024945 | orchestrator | 2026-01-02 03:43:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:55.074639 | orchestrator | 2026-01-02 03:43:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:55.075721 | orchestrator | 2026-01-02 03:43:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:55.075781 | orchestrator | 2026-01-02 03:43:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:43:58.119662 | orchestrator | 2026-01-02 03:43:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:43:58.120666 | orchestrator | 2026-01-02 03:43:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:43:58.120732 | orchestrator | 2026-01-02 03:43:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:01.161918 | orchestrator | 2026-01-02 03:44:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:01.163873 | orchestrator | 2026-01-02 03:44:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:01.163908 | orchestrator | 2026-01-02 03:44:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:04.207209 | orchestrator | 2026-01-02 03:44:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:04.209415 | orchestrator | 2026-01-02 03:44:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:04.209453 | orchestrator | 2026-01-02 03:44:04 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:44:07.256249 | orchestrator | 2026-01-02 03:44:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:07.257429 | orchestrator | 2026-01-02 03:44:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:07.257661 | orchestrator | 2026-01-02 03:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:10.304892 | orchestrator | 2026-01-02 03:44:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:10.306355 | orchestrator | 2026-01-02 03:44:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:10.306430 | orchestrator | 2026-01-02 03:44:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:13.352211 | orchestrator | 2026-01-02 03:44:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:13.354485 | orchestrator | 2026-01-02 03:44:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:13.354622 | orchestrator | 2026-01-02 03:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:16.394893 | orchestrator | 2026-01-02 03:44:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:16.396858 | orchestrator | 2026-01-02 03:44:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:16.396928 | orchestrator | 2026-01-02 03:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:19.450797 | orchestrator | 2026-01-02 03:44:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:19.452910 | orchestrator | 2026-01-02 03:44:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:19.452959 | orchestrator | 2026-01-02 03:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:22.499486 | orchestrator | 2026-01-02 
03:44:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:22.500601 | orchestrator | 2026-01-02 03:44:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:22.500664 | orchestrator | 2026-01-02 03:44:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:25.548587 | orchestrator | 2026-01-02 03:44:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:25.549803 | orchestrator | 2026-01-02 03:44:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:25.549987 | orchestrator | 2026-01-02 03:44:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:28.601138 | orchestrator | 2026-01-02 03:44:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:28.603364 | orchestrator | 2026-01-02 03:44:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:28.603419 | orchestrator | 2026-01-02 03:44:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:31.649070 | orchestrator | 2026-01-02 03:44:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:31.651019 | orchestrator | 2026-01-02 03:44:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:31.651126 | orchestrator | 2026-01-02 03:44:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:34.699404 | orchestrator | 2026-01-02 03:44:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:34.701380 | orchestrator | 2026-01-02 03:44:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:34.701481 | orchestrator | 2026-01-02 03:44:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:37.743231 | orchestrator | 2026-01-02 03:44:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 03:44:37.745020 | orchestrator | 2026-01-02 03:44:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:37.745080 | orchestrator | 2026-01-02 03:44:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:40.792921 | orchestrator | 2026-01-02 03:44:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:40.794947 | orchestrator | 2026-01-02 03:44:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:40.795013 | orchestrator | 2026-01-02 03:44:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:43.841094 | orchestrator | 2026-01-02 03:44:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:43.842347 | orchestrator | 2026-01-02 03:44:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:43.842553 | orchestrator | 2026-01-02 03:44:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:46.885689 | orchestrator | 2026-01-02 03:44:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:46.887919 | orchestrator | 2026-01-02 03:44:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:46.887981 | orchestrator | 2026-01-02 03:44:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:49.933581 | orchestrator | 2026-01-02 03:44:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:49.935105 | orchestrator | 2026-01-02 03:44:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:49.935173 | orchestrator | 2026-01-02 03:44:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:52.975440 | orchestrator | 2026-01-02 03:44:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:52.977514 | orchestrator | 2026-01-02 03:44:52 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:52.978148 | orchestrator | 2026-01-02 03:44:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:56.023639 | orchestrator | 2026-01-02 03:44:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:56.025436 | orchestrator | 2026-01-02 03:44:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:56.025490 | orchestrator | 2026-01-02 03:44:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:44:59.071189 | orchestrator | 2026-01-02 03:44:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:44:59.072522 | orchestrator | 2026-01-02 03:44:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:44:59.072558 | orchestrator | 2026-01-02 03:44:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:02.111379 | orchestrator | 2026-01-02 03:45:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:02.112421 | orchestrator | 2026-01-02 03:45:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:02.112504 | orchestrator | 2026-01-02 03:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:05.161107 | orchestrator | 2026-01-02 03:45:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:05.163642 | orchestrator | 2026-01-02 03:45:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:05.163776 | orchestrator | 2026-01-02 03:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:08.224851 | orchestrator | 2026-01-02 03:45:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:08.227990 | orchestrator | 2026-01-02 03:45:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
03:45:08.228071 | orchestrator | 2026-01-02 03:45:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:11.276304 | orchestrator | 2026-01-02 03:45:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:11.277937 | orchestrator | 2026-01-02 03:45:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:11.278178 | orchestrator | 2026-01-02 03:45:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:14.327122 | orchestrator | 2026-01-02 03:45:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:14.331013 | orchestrator | 2026-01-02 03:45:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:14.331102 | orchestrator | 2026-01-02 03:45:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:17.377137 | orchestrator | 2026-01-02 03:45:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:17.379267 | orchestrator | 2026-01-02 03:45:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:17.379312 | orchestrator | 2026-01-02 03:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:20.431638 | orchestrator | 2026-01-02 03:45:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:20.433860 | orchestrator | 2026-01-02 03:45:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:20.433893 | orchestrator | 2026-01-02 03:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:23.481581 | orchestrator | 2026-01-02 03:45:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:23.483004 | orchestrator | 2026-01-02 03:45:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:23.483050 | orchestrator | 2026-01-02 03:45:23 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:45:26.529374 | orchestrator | 2026-01-02 03:45:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:26.531042 | orchestrator | 2026-01-02 03:45:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:26.531110 | orchestrator | 2026-01-02 03:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:29.581950 | orchestrator | 2026-01-02 03:45:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:29.583877 | orchestrator | 2026-01-02 03:45:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:29.583929 | orchestrator | 2026-01-02 03:45:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:32.627938 | orchestrator | 2026-01-02 03:45:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:32.629303 | orchestrator | 2026-01-02 03:45:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:32.629355 | orchestrator | 2026-01-02 03:45:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:35.675208 | orchestrator | 2026-01-02 03:45:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:35.676475 | orchestrator | 2026-01-02 03:45:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:35.676516 | orchestrator | 2026-01-02 03:45:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:38.718924 | orchestrator | 2026-01-02 03:45:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:38.721035 | orchestrator | 2026-01-02 03:45:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:38.721090 | orchestrator | 2026-01-02 03:45:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:41.769321 | orchestrator | 2026-01-02 
03:45:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:41.770834 | orchestrator | 2026-01-02 03:45:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:41.771031 | orchestrator | 2026-01-02 03:45:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:44.817340 | orchestrator | 2026-01-02 03:45:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:44.818335 | orchestrator | 2026-01-02 03:45:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:44.818469 | orchestrator | 2026-01-02 03:45:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:47.862320 | orchestrator | 2026-01-02 03:45:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:47.863580 | orchestrator | 2026-01-02 03:45:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:47.863636 | orchestrator | 2026-01-02 03:45:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:50.914535 | orchestrator | 2026-01-02 03:45:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:50.916545 | orchestrator | 2026-01-02 03:45:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:50.916603 | orchestrator | 2026-01-02 03:45:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:53.962324 | orchestrator | 2026-01-02 03:45:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:45:53.964108 | orchestrator | 2026-01-02 03:45:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:53.964190 | orchestrator | 2026-01-02 03:45:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:45:57.013046 | orchestrator | 2026-01-02 03:45:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 03:45:57.014333 | orchestrator | 2026-01-02 03:45:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:45:57.014368 | orchestrator | 2026-01-02 03:45:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:00.055671 | orchestrator | 2026-01-02 03:46:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:00.056666 | orchestrator | 2026-01-02 03:46:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:00.056690 | orchestrator | 2026-01-02 03:46:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:03.095604 | orchestrator | 2026-01-02 03:46:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:03.096777 | orchestrator | 2026-01-02 03:46:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:03.097157 | orchestrator | 2026-01-02 03:46:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:06.141616 | orchestrator | 2026-01-02 03:46:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:06.143379 | orchestrator | 2026-01-02 03:46:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:06.143465 | orchestrator | 2026-01-02 03:46:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:09.196465 | orchestrator | 2026-01-02 03:46:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:09.197546 | orchestrator | 2026-01-02 03:46:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:09.197630 | orchestrator | 2026-01-02 03:46:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:12.246815 | orchestrator | 2026-01-02 03:46:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:12.249208 | orchestrator | 2026-01-02 03:46:12 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:12.249354 | orchestrator | 2026-01-02 03:46:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:15.296606 | orchestrator | 2026-01-02 03:46:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:15.298307 | orchestrator | 2026-01-02 03:46:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:15.298511 | orchestrator | 2026-01-02 03:46:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:18.343664 | orchestrator | 2026-01-02 03:46:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:18.345802 | orchestrator | 2026-01-02 03:46:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:18.345856 | orchestrator | 2026-01-02 03:46:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:21.392036 | orchestrator | 2026-01-02 03:46:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:21.394175 | orchestrator | 2026-01-02 03:46:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:21.394358 | orchestrator | 2026-01-02 03:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:24.442979 | orchestrator | 2026-01-02 03:46:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:24.444116 | orchestrator | 2026-01-02 03:46:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:24.444181 | orchestrator | 2026-01-02 03:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:27.491338 | orchestrator | 2026-01-02 03:46:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:27.493376 | orchestrator | 2026-01-02 03:46:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
03:46:27.493402 | orchestrator | 2026-01-02 03:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:30.536396 | orchestrator | 2026-01-02 03:46:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:30.537595 | orchestrator | 2026-01-02 03:46:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:30.537635 | orchestrator | 2026-01-02 03:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:33.584180 | orchestrator | 2026-01-02 03:46:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:33.585365 | orchestrator | 2026-01-02 03:46:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:33.585412 | orchestrator | 2026-01-02 03:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:36.631878 | orchestrator | 2026-01-02 03:46:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:36.633206 | orchestrator | 2026-01-02 03:46:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:36.633258 | orchestrator | 2026-01-02 03:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:39.681648 | orchestrator | 2026-01-02 03:46:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:39.684002 | orchestrator | 2026-01-02 03:46:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:39.684154 | orchestrator | 2026-01-02 03:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:42.729848 | orchestrator | 2026-01-02 03:46:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:42.731562 | orchestrator | 2026-01-02 03:46:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:42.731679 | orchestrator | 2026-01-02 03:46:42 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:46:45.770275 | orchestrator | 2026-01-02 03:46:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:45.771879 | orchestrator | 2026-01-02 03:46:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:45.771929 | orchestrator | 2026-01-02 03:46:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:48.813355 | orchestrator | 2026-01-02 03:46:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:48.814588 | orchestrator | 2026-01-02 03:46:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:48.814615 | orchestrator | 2026-01-02 03:46:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:51.846249 | orchestrator | 2026-01-02 03:46:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:51.848040 | orchestrator | 2026-01-02 03:46:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:51.848109 | orchestrator | 2026-01-02 03:46:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:54.896511 | orchestrator | 2026-01-02 03:46:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:54.897617 | orchestrator | 2026-01-02 03:46:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:54.897735 | orchestrator | 2026-01-02 03:46:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:46:57.948388 | orchestrator | 2026-01-02 03:46:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:46:57.949403 | orchestrator | 2026-01-02 03:46:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:46:57.949436 | orchestrator | 2026-01-02 03:46:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:47:00.997671 | orchestrator | 2026-01-02 
03:47:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:47:01.000880 | orchestrator | 2026-01-02 03:47:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:47:01.001094 | orchestrator | 2026-01-02 03:47:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:47:04.057449 | orchestrator | 2026-01-02 03:47:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:47:04.059632 | orchestrator | 2026-01-02 03:47:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:47:04.059671 | orchestrator | 2026-01-02 03:47:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:47:07.107094 | orchestrator | 2026-01-02 03:47:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:47:07.108567 | orchestrator | 2026-01-02 03:47:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:47:07.108615 | orchestrator | 2026-01-02 03:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:47:10.162231 | orchestrator | 2026-01-02 03:47:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:47:10.163766 | orchestrator | 2026-01-02 03:47:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:47:10.163813 | orchestrator | 2026-01-02 03:47:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:47:13.212368 | orchestrator | 2026-01-02 03:47:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:47:13.215329 | orchestrator | 2026-01-02 03:47:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:47:13.215683 | orchestrator | 2026-01-02 03:47:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:47:16.258236 | orchestrator | 2026-01-02 03:47:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 03:47:16.259509 | orchestrator | 2026-01-02 03:47:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:47:16.259676 | orchestrator | 2026-01-02 03:47:16 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:47:19.306142 | orchestrator | 2026-01-02 03:47:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:47:19.308789 | orchestrator | 2026-01-02 03:47:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:47:19.308851 | orchestrator | 2026-01-02 03:47:19 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 both remained in state STARTED, checked every ~3 seconds from 03:47:22 through 03:52:45 ...]
2026-01-02 03:52:45.491842 | orchestrator | 2026-01-02 03:52:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:52:45.493533 | orchestrator | 2026-01-02 03:52:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:52:45.493633 | orchestrator | 2026-01-02 03:52:45 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:52:48.542932 | orchestrator | 2026-01-02 03:52:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:52:48.544283 | orchestrator | 2026-01-02 03:52:48 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:52:48.544311 | orchestrator | 2026-01-02 03:52:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:51.594910 | orchestrator | 2026-01-02 03:52:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:52:51.596159 | orchestrator | 2026-01-02 03:52:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:52:51.596373 | orchestrator | 2026-01-02 03:52:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:54.640440 | orchestrator | 2026-01-02 03:52:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:52:54.641700 | orchestrator | 2026-01-02 03:52:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:52:54.641879 | orchestrator | 2026-01-02 03:52:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:52:57.685678 | orchestrator | 2026-01-02 03:52:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:52:57.687634 | orchestrator | 2026-01-02 03:52:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:52:57.687853 | orchestrator | 2026-01-02 03:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:00.732011 | orchestrator | 2026-01-02 03:53:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:00.733039 | orchestrator | 2026-01-02 03:53:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:00.733190 | orchestrator | 2026-01-02 03:53:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:03.782393 | orchestrator | 2026-01-02 03:53:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:03.784127 | orchestrator | 2026-01-02 03:53:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
03:53:03.784191 | orchestrator | 2026-01-02 03:53:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:06.830588 | orchestrator | 2026-01-02 03:53:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:06.832591 | orchestrator | 2026-01-02 03:53:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:06.832672 | orchestrator | 2026-01-02 03:53:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:09.876607 | orchestrator | 2026-01-02 03:53:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:09.879377 | orchestrator | 2026-01-02 03:53:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:09.879427 | orchestrator | 2026-01-02 03:53:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:12.923712 | orchestrator | 2026-01-02 03:53:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:12.925534 | orchestrator | 2026-01-02 03:53:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:12.925637 | orchestrator | 2026-01-02 03:53:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:15.974717 | orchestrator | 2026-01-02 03:53:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:15.976330 | orchestrator | 2026-01-02 03:53:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:15.976359 | orchestrator | 2026-01-02 03:53:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:19.021722 | orchestrator | 2026-01-02 03:53:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:19.024375 | orchestrator | 2026-01-02 03:53:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:19.024446 | orchestrator | 2026-01-02 03:53:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:53:22.069913 | orchestrator | 2026-01-02 03:53:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:22.070373 | orchestrator | 2026-01-02 03:53:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:22.070394 | orchestrator | 2026-01-02 03:53:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:25.114881 | orchestrator | 2026-01-02 03:53:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:25.116775 | orchestrator | 2026-01-02 03:53:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:25.116966 | orchestrator | 2026-01-02 03:53:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:28.164667 | orchestrator | 2026-01-02 03:53:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:28.166524 | orchestrator | 2026-01-02 03:53:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:28.170392 | orchestrator | 2026-01-02 03:53:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:31.209990 | orchestrator | 2026-01-02 03:53:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:31.210284 | orchestrator | 2026-01-02 03:53:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:31.210334 | orchestrator | 2026-01-02 03:53:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:34.254179 | orchestrator | 2026-01-02 03:53:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:34.254880 | orchestrator | 2026-01-02 03:53:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:34.254911 | orchestrator | 2026-01-02 03:53:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:37.303081 | orchestrator | 2026-01-02 
03:53:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:37.304499 | orchestrator | 2026-01-02 03:53:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:37.304836 | orchestrator | 2026-01-02 03:53:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:40.348027 | orchestrator | 2026-01-02 03:53:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:40.349671 | orchestrator | 2026-01-02 03:53:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:40.349732 | orchestrator | 2026-01-02 03:53:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:43.397293 | orchestrator | 2026-01-02 03:53:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:43.399352 | orchestrator | 2026-01-02 03:53:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:43.399483 | orchestrator | 2026-01-02 03:53:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:46.445218 | orchestrator | 2026-01-02 03:53:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:46.447044 | orchestrator | 2026-01-02 03:53:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:46.447092 | orchestrator | 2026-01-02 03:53:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:49.494980 | orchestrator | 2026-01-02 03:53:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:49.496653 | orchestrator | 2026-01-02 03:53:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:49.496724 | orchestrator | 2026-01-02 03:53:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:52.543215 | orchestrator | 2026-01-02 03:53:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 03:53:52.543992 | orchestrator | 2026-01-02 03:53:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:52.544098 | orchestrator | 2026-01-02 03:53:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:55.583724 | orchestrator | 2026-01-02 03:53:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:55.585071 | orchestrator | 2026-01-02 03:53:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:55.585132 | orchestrator | 2026-01-02 03:53:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:53:58.629746 | orchestrator | 2026-01-02 03:53:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:53:58.631345 | orchestrator | 2026-01-02 03:53:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:53:58.755361 | orchestrator | 2026-01-02 03:53:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:01.676419 | orchestrator | 2026-01-02 03:54:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:01.677967 | orchestrator | 2026-01-02 03:54:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:01.678005 | orchestrator | 2026-01-02 03:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:04.726691 | orchestrator | 2026-01-02 03:54:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:04.728637 | orchestrator | 2026-01-02 03:54:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:04.728689 | orchestrator | 2026-01-02 03:54:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:07.777096 | orchestrator | 2026-01-02 03:54:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:07.780011 | orchestrator | 2026-01-02 03:54:07 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:07.780042 | orchestrator | 2026-01-02 03:54:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:10.829797 | orchestrator | 2026-01-02 03:54:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:10.832706 | orchestrator | 2026-01-02 03:54:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:10.832836 | orchestrator | 2026-01-02 03:54:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:13.876957 | orchestrator | 2026-01-02 03:54:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:13.879095 | orchestrator | 2026-01-02 03:54:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:13.879157 | orchestrator | 2026-01-02 03:54:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:16.919275 | orchestrator | 2026-01-02 03:54:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:16.920751 | orchestrator | 2026-01-02 03:54:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:16.920786 | orchestrator | 2026-01-02 03:54:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:19.965399 | orchestrator | 2026-01-02 03:54:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:19.967099 | orchestrator | 2026-01-02 03:54:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:19.967171 | orchestrator | 2026-01-02 03:54:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:23.023345 | orchestrator | 2026-01-02 03:54:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:23.024684 | orchestrator | 2026-01-02 03:54:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
03:54:23.024726 | orchestrator | 2026-01-02 03:54:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:26.071787 | orchestrator | 2026-01-02 03:54:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:26.073727 | orchestrator | 2026-01-02 03:54:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:26.073779 | orchestrator | 2026-01-02 03:54:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:29.119914 | orchestrator | 2026-01-02 03:54:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:29.121477 | orchestrator | 2026-01-02 03:54:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:29.121502 | orchestrator | 2026-01-02 03:54:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:32.164882 | orchestrator | 2026-01-02 03:54:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:32.165240 | orchestrator | 2026-01-02 03:54:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:32.165388 | orchestrator | 2026-01-02 03:54:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:35.205764 | orchestrator | 2026-01-02 03:54:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:35.208361 | orchestrator | 2026-01-02 03:54:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:35.208463 | orchestrator | 2026-01-02 03:54:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:38.256085 | orchestrator | 2026-01-02 03:54:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:38.257482 | orchestrator | 2026-01-02 03:54:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:38.257555 | orchestrator | 2026-01-02 03:54:38 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:54:41.300762 | orchestrator | 2026-01-02 03:54:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:41.303063 | orchestrator | 2026-01-02 03:54:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:41.303184 | orchestrator | 2026-01-02 03:54:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:44.346579 | orchestrator | 2026-01-02 03:54:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:44.348134 | orchestrator | 2026-01-02 03:54:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:44.348184 | orchestrator | 2026-01-02 03:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:47.393437 | orchestrator | 2026-01-02 03:54:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:47.395667 | orchestrator | 2026-01-02 03:54:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:47.395748 | orchestrator | 2026-01-02 03:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:50.441105 | orchestrator | 2026-01-02 03:54:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:50.442438 | orchestrator | 2026-01-02 03:54:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:50.442519 | orchestrator | 2026-01-02 03:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:53.491716 | orchestrator | 2026-01-02 03:54:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:53.492930 | orchestrator | 2026-01-02 03:54:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:53.492973 | orchestrator | 2026-01-02 03:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:56.534327 | orchestrator | 2026-01-02 
03:54:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:56.535623 | orchestrator | 2026-01-02 03:54:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:56.535659 | orchestrator | 2026-01-02 03:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:54:59.580800 | orchestrator | 2026-01-02 03:54:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:54:59.584458 | orchestrator | 2026-01-02 03:54:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:54:59.584537 | orchestrator | 2026-01-02 03:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:02.627120 | orchestrator | 2026-01-02 03:55:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:02.631727 | orchestrator | 2026-01-02 03:55:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:02.632059 | orchestrator | 2026-01-02 03:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:05.672404 | orchestrator | 2026-01-02 03:55:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:05.672890 | orchestrator | 2026-01-02 03:55:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:05.673536 | orchestrator | 2026-01-02 03:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:08.717056 | orchestrator | 2026-01-02 03:55:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:08.719075 | orchestrator | 2026-01-02 03:55:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:08.719192 | orchestrator | 2026-01-02 03:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:11.758928 | orchestrator | 2026-01-02 03:55:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 03:55:11.759281 | orchestrator | 2026-01-02 03:55:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:11.759459 | orchestrator | 2026-01-02 03:55:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:14.807858 | orchestrator | 2026-01-02 03:55:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:14.809917 | orchestrator | 2026-01-02 03:55:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:14.809948 | orchestrator | 2026-01-02 03:55:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:17.856387 | orchestrator | 2026-01-02 03:55:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:17.858194 | orchestrator | 2026-01-02 03:55:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:17.858425 | orchestrator | 2026-01-02 03:55:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:20.903078 | orchestrator | 2026-01-02 03:55:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:20.905005 | orchestrator | 2026-01-02 03:55:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:20.905066 | orchestrator | 2026-01-02 03:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:23.955585 | orchestrator | 2026-01-02 03:55:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:23.957375 | orchestrator | 2026-01-02 03:55:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:23.957500 | orchestrator | 2026-01-02 03:55:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:27.010158 | orchestrator | 2026-01-02 03:55:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:27.011882 | orchestrator | 2026-01-02 03:55:27 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:27.011978 | orchestrator | 2026-01-02 03:55:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:30.061359 | orchestrator | 2026-01-02 03:55:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:30.063397 | orchestrator | 2026-01-02 03:55:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:30.063458 | orchestrator | 2026-01-02 03:55:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:33.105871 | orchestrator | 2026-01-02 03:55:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:33.107322 | orchestrator | 2026-01-02 03:55:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:33.107441 | orchestrator | 2026-01-02 03:55:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:36.156050 | orchestrator | 2026-01-02 03:55:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:36.158304 | orchestrator | 2026-01-02 03:55:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:36.158527 | orchestrator | 2026-01-02 03:55:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:39.215465 | orchestrator | 2026-01-02 03:55:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:39.217540 | orchestrator | 2026-01-02 03:55:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:39.217611 | orchestrator | 2026-01-02 03:55:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:42.262824 | orchestrator | 2026-01-02 03:55:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:42.264521 | orchestrator | 2026-01-02 03:55:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
03:55:42.264584 | orchestrator | 2026-01-02 03:55:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:45.309994 | orchestrator | 2026-01-02 03:55:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:45.312360 | orchestrator | 2026-01-02 03:55:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:45.312514 | orchestrator | 2026-01-02 03:55:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:48.359164 | orchestrator | 2026-01-02 03:55:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:48.360340 | orchestrator | 2026-01-02 03:55:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:48.360466 | orchestrator | 2026-01-02 03:55:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:51.399147 | orchestrator | 2026-01-02 03:55:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:51.400866 | orchestrator | 2026-01-02 03:55:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:51.400904 | orchestrator | 2026-01-02 03:55:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:54.446127 | orchestrator | 2026-01-02 03:55:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:54.447519 | orchestrator | 2026-01-02 03:55:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:54.447576 | orchestrator | 2026-01-02 03:55:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:55:57.495084 | orchestrator | 2026-01-02 03:55:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:55:57.496741 | orchestrator | 2026-01-02 03:55:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:55:57.496769 | orchestrator | 2026-01-02 03:55:57 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 03:56:00.540025 | orchestrator | 2026-01-02 03:56:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:00.541609 | orchestrator | 2026-01-02 03:56:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:00.541709 | orchestrator | 2026-01-02 03:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:03.591355 | orchestrator | 2026-01-02 03:56:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:03.593240 | orchestrator | 2026-01-02 03:56:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:03.593309 | orchestrator | 2026-01-02 03:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:06.640735 | orchestrator | 2026-01-02 03:56:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:06.643047 | orchestrator | 2026-01-02 03:56:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:06.643315 | orchestrator | 2026-01-02 03:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:09.691363 | orchestrator | 2026-01-02 03:56:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:09.693600 | orchestrator | 2026-01-02 03:56:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:09.693660 | orchestrator | 2026-01-02 03:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:12.736632 | orchestrator | 2026-01-02 03:56:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:12.737988 | orchestrator | 2026-01-02 03:56:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:12.738106 | orchestrator | 2026-01-02 03:56:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:15.784410 | orchestrator | 2026-01-02 
03:56:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:15.785737 | orchestrator | 2026-01-02 03:56:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:15.785765 | orchestrator | 2026-01-02 03:56:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:18.833568 | orchestrator | 2026-01-02 03:56:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:18.835521 | orchestrator | 2026-01-02 03:56:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:18.836049 | orchestrator | 2026-01-02 03:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:21.884175 | orchestrator | 2026-01-02 03:56:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:21.886546 | orchestrator | 2026-01-02 03:56:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:21.886615 | orchestrator | 2026-01-02 03:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:24.937927 | orchestrator | 2026-01-02 03:56:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:24.940185 | orchestrator | 2026-01-02 03:56:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:24.940436 | orchestrator | 2026-01-02 03:56:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:27.992209 | orchestrator | 2026-01-02 03:56:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 03:56:27.993663 | orchestrator | 2026-01-02 03:56:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 03:56:27.993798 | orchestrator | 2026-01-02 03:56:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 03:56:31.038745 | orchestrator | 2026-01-02 03:56:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED
2026-01-02 03:56:31.039888 | orchestrator | 2026-01-02 03:56:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:56:31.040194 | orchestrator | 2026-01-02 03:56:31 | INFO  | Wait 1 second(s) until the next check
2026-01-02 03:56:34.087288 | orchestrator | 2026-01-02 03:56:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 03:56:34.089437 | orchestrator | 2026-01-02 03:56:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 03:56:34.089475 | orchestrator | 2026-01-02 03:56:34 | INFO  | Wait 1 second(s) until the next check
[identical polling output elided: tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305 remained in state STARTED on every check, repeated roughly every 3 seconds from 03:56:37 through 04:01:41]
2026-01-02 04:01:44.912002 | orchestrator | 2026-01-02 04:01:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:01:44.913719 | orchestrator | 2026-01-02 04:01:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:01:44.913766 | orchestrator | 2026-01-02 04:01:44 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:01:47.960658 | orchestrator | 2026-01-02 04:01:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state
STARTED 2026-01-02 04:01:47.962765 | orchestrator | 2026-01-02 04:01:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:01:47.962888 | orchestrator | 2026-01-02 04:01:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:01:51.009123 | orchestrator | 2026-01-02 04:01:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:01:51.011227 | orchestrator | 2026-01-02 04:01:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:01:51.011340 | orchestrator | 2026-01-02 04:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:01:54.052278 | orchestrator | 2026-01-02 04:01:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:01:54.053465 | orchestrator | 2026-01-02 04:01:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:01:54.053637 | orchestrator | 2026-01-02 04:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:01:57.102179 | orchestrator | 2026-01-02 04:01:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:01:57.103840 | orchestrator | 2026-01-02 04:01:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:01:57.103933 | orchestrator | 2026-01-02 04:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:00.153689 | orchestrator | 2026-01-02 04:02:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:00.156604 | orchestrator | 2026-01-02 04:02:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:00.156664 | orchestrator | 2026-01-02 04:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:03.202892 | orchestrator | 2026-01-02 04:02:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:03.203636 | orchestrator | 2026-01-02 04:02:03 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:03.203700 | orchestrator | 2026-01-02 04:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:06.253824 | orchestrator | 2026-01-02 04:02:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:06.255634 | orchestrator | 2026-01-02 04:02:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:06.255815 | orchestrator | 2026-01-02 04:02:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:09.305945 | orchestrator | 2026-01-02 04:02:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:09.306944 | orchestrator | 2026-01-02 04:02:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:09.307040 | orchestrator | 2026-01-02 04:02:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:12.354851 | orchestrator | 2026-01-02 04:02:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:12.356707 | orchestrator | 2026-01-02 04:02:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:12.356761 | orchestrator | 2026-01-02 04:02:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:15.408480 | orchestrator | 2026-01-02 04:02:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:15.409876 | orchestrator | 2026-01-02 04:02:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:15.410161 | orchestrator | 2026-01-02 04:02:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:18.462307 | orchestrator | 2026-01-02 04:02:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:18.463629 | orchestrator | 2026-01-02 04:02:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:02:18.463687 | orchestrator | 2026-01-02 04:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:21.504867 | orchestrator | 2026-01-02 04:02:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:21.507704 | orchestrator | 2026-01-02 04:02:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:21.507782 | orchestrator | 2026-01-02 04:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:24.551167 | orchestrator | 2026-01-02 04:02:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:24.552745 | orchestrator | 2026-01-02 04:02:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:24.552787 | orchestrator | 2026-01-02 04:02:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:27.604367 | orchestrator | 2026-01-02 04:02:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:27.606827 | orchestrator | 2026-01-02 04:02:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:27.606900 | orchestrator | 2026-01-02 04:02:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:30.657062 | orchestrator | 2026-01-02 04:02:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:30.659985 | orchestrator | 2026-01-02 04:02:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:30.660037 | orchestrator | 2026-01-02 04:02:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:33.713925 | orchestrator | 2026-01-02 04:02:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:33.716032 | orchestrator | 2026-01-02 04:02:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:33.716080 | orchestrator | 2026-01-02 04:02:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:02:36.763014 | orchestrator | 2026-01-02 04:02:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:36.765315 | orchestrator | 2026-01-02 04:02:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:36.765398 | orchestrator | 2026-01-02 04:02:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:39.813862 | orchestrator | 2026-01-02 04:02:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:39.817595 | orchestrator | 2026-01-02 04:02:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:39.817656 | orchestrator | 2026-01-02 04:02:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:42.865139 | orchestrator | 2026-01-02 04:02:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:42.867925 | orchestrator | 2026-01-02 04:02:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:42.867997 | orchestrator | 2026-01-02 04:02:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:45.916848 | orchestrator | 2026-01-02 04:02:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:45.919088 | orchestrator | 2026-01-02 04:02:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:45.919166 | orchestrator | 2026-01-02 04:02:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:48.972982 | orchestrator | 2026-01-02 04:02:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:48.975333 | orchestrator | 2026-01-02 04:02:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:48.975399 | orchestrator | 2026-01-02 04:02:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:52.021917 | orchestrator | 2026-01-02 
04:02:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:52.022215 | orchestrator | 2026-01-02 04:02:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:52.022249 | orchestrator | 2026-01-02 04:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:55.068051 | orchestrator | 2026-01-02 04:02:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:55.069405 | orchestrator | 2026-01-02 04:02:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:55.069443 | orchestrator | 2026-01-02 04:02:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:02:58.117753 | orchestrator | 2026-01-02 04:02:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:02:58.119240 | orchestrator | 2026-01-02 04:02:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:02:58.119325 | orchestrator | 2026-01-02 04:02:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:01.158134 | orchestrator | 2026-01-02 04:03:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:01.159222 | orchestrator | 2026-01-02 04:03:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:01.159311 | orchestrator | 2026-01-02 04:03:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:04.203709 | orchestrator | 2026-01-02 04:03:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:04.205877 | orchestrator | 2026-01-02 04:03:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:04.206376 | orchestrator | 2026-01-02 04:03:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:07.253217 | orchestrator | 2026-01-02 04:03:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:03:07.254983 | orchestrator | 2026-01-02 04:03:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:07.255299 | orchestrator | 2026-01-02 04:03:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:10.303683 | orchestrator | 2026-01-02 04:03:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:10.305635 | orchestrator | 2026-01-02 04:03:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:10.305676 | orchestrator | 2026-01-02 04:03:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:13.354759 | orchestrator | 2026-01-02 04:03:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:13.357141 | orchestrator | 2026-01-02 04:03:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:13.357186 | orchestrator | 2026-01-02 04:03:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:16.403197 | orchestrator | 2026-01-02 04:03:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:16.405059 | orchestrator | 2026-01-02 04:03:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:16.405149 | orchestrator | 2026-01-02 04:03:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:19.454947 | orchestrator | 2026-01-02 04:03:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:19.456103 | orchestrator | 2026-01-02 04:03:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:19.456245 | orchestrator | 2026-01-02 04:03:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:22.497837 | orchestrator | 2026-01-02 04:03:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:22.499517 | orchestrator | 2026-01-02 04:03:22 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:22.499586 | orchestrator | 2026-01-02 04:03:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:25.542153 | orchestrator | 2026-01-02 04:03:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:25.543581 | orchestrator | 2026-01-02 04:03:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:25.543611 | orchestrator | 2026-01-02 04:03:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:28.587765 | orchestrator | 2026-01-02 04:03:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:28.589860 | orchestrator | 2026-01-02 04:03:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:28.589953 | orchestrator | 2026-01-02 04:03:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:31.638558 | orchestrator | 2026-01-02 04:03:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:31.640790 | orchestrator | 2026-01-02 04:03:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:31.640821 | orchestrator | 2026-01-02 04:03:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:34.687126 | orchestrator | 2026-01-02 04:03:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:34.688246 | orchestrator | 2026-01-02 04:03:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:34.688307 | orchestrator | 2026-01-02 04:03:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:37.734997 | orchestrator | 2026-01-02 04:03:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:37.736967 | orchestrator | 2026-01-02 04:03:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:03:37.737229 | orchestrator | 2026-01-02 04:03:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:40.775835 | orchestrator | 2026-01-02 04:03:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:40.776606 | orchestrator | 2026-01-02 04:03:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:40.776808 | orchestrator | 2026-01-02 04:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:43.822828 | orchestrator | 2026-01-02 04:03:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:43.824170 | orchestrator | 2026-01-02 04:03:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:43.824286 | orchestrator | 2026-01-02 04:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:46.866398 | orchestrator | 2026-01-02 04:03:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:46.867716 | orchestrator | 2026-01-02 04:03:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:46.867789 | orchestrator | 2026-01-02 04:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:49.912257 | orchestrator | 2026-01-02 04:03:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:49.913780 | orchestrator | 2026-01-02 04:03:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:49.913819 | orchestrator | 2026-01-02 04:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:52.959013 | orchestrator | 2026-01-02 04:03:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:52.960873 | orchestrator | 2026-01-02 04:03:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:52.960930 | orchestrator | 2026-01-02 04:03:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:03:56.011317 | orchestrator | 2026-01-02 04:03:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:56.013505 | orchestrator | 2026-01-02 04:03:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:56.013545 | orchestrator | 2026-01-02 04:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:03:59.060639 | orchestrator | 2026-01-02 04:03:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:03:59.061215 | orchestrator | 2026-01-02 04:03:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:03:59.061232 | orchestrator | 2026-01-02 04:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:02.107533 | orchestrator | 2026-01-02 04:04:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:02.108250 | orchestrator | 2026-01-02 04:04:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:02.108589 | orchestrator | 2026-01-02 04:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:05.158409 | orchestrator | 2026-01-02 04:04:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:05.160495 | orchestrator | 2026-01-02 04:04:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:05.160614 | orchestrator | 2026-01-02 04:04:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:08.206196 | orchestrator | 2026-01-02 04:04:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:08.207870 | orchestrator | 2026-01-02 04:04:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:08.207918 | orchestrator | 2026-01-02 04:04:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:11.251235 | orchestrator | 2026-01-02 
04:04:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:11.252606 | orchestrator | 2026-01-02 04:04:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:11.252666 | orchestrator | 2026-01-02 04:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:14.301646 | orchestrator | 2026-01-02 04:04:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:14.303033 | orchestrator | 2026-01-02 04:04:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:14.303080 | orchestrator | 2026-01-02 04:04:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:17.342943 | orchestrator | 2026-01-02 04:04:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:17.344726 | orchestrator | 2026-01-02 04:04:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:17.345171 | orchestrator | 2026-01-02 04:04:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:20.391290 | orchestrator | 2026-01-02 04:04:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:20.391969 | orchestrator | 2026-01-02 04:04:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:20.392744 | orchestrator | 2026-01-02 04:04:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:23.440238 | orchestrator | 2026-01-02 04:04:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:23.442555 | orchestrator | 2026-01-02 04:04:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:23.442684 | orchestrator | 2026-01-02 04:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:26.492976 | orchestrator | 2026-01-02 04:04:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:04:26.495107 | orchestrator | 2026-01-02 04:04:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:26.495180 | orchestrator | 2026-01-02 04:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:29.540596 | orchestrator | 2026-01-02 04:04:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:29.541638 | orchestrator | 2026-01-02 04:04:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:29.541918 | orchestrator | 2026-01-02 04:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:32.588525 | orchestrator | 2026-01-02 04:04:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:32.590589 | orchestrator | 2026-01-02 04:04:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:32.590670 | orchestrator | 2026-01-02 04:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:35.640861 | orchestrator | 2026-01-02 04:04:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:35.642872 | orchestrator | 2026-01-02 04:04:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:35.643160 | orchestrator | 2026-01-02 04:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:38.697067 | orchestrator | 2026-01-02 04:04:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:38.699105 | orchestrator | 2026-01-02 04:04:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:38.699146 | orchestrator | 2026-01-02 04:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:41.740953 | orchestrator | 2026-01-02 04:04:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:41.742245 | orchestrator | 2026-01-02 04:04:41 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:41.742356 | orchestrator | 2026-01-02 04:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:44.788922 | orchestrator | 2026-01-02 04:04:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:44.790634 | orchestrator | 2026-01-02 04:04:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:44.790694 | orchestrator | 2026-01-02 04:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:47.835956 | orchestrator | 2026-01-02 04:04:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:47.837670 | orchestrator | 2026-01-02 04:04:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:47.837819 | orchestrator | 2026-01-02 04:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:50.886006 | orchestrator | 2026-01-02 04:04:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:50.888142 | orchestrator | 2026-01-02 04:04:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:50.888182 | orchestrator | 2026-01-02 04:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:53.931589 | orchestrator | 2026-01-02 04:04:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:53.933262 | orchestrator | 2026-01-02 04:04:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:04:53.933304 | orchestrator | 2026-01-02 04:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:04:56.984211 | orchestrator | 2026-01-02 04:04:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:04:56.985579 | orchestrator | 2026-01-02 04:04:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:04:56.985624 | orchestrator | 2026-01-02 04:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:00.029543 | orchestrator | 2026-01-02 04:05:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:05:00.031450 | orchestrator | 2026-01-02 04:05:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:05:00.031514 | orchestrator | 2026-01-02 04:05:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:03.080103 | orchestrator | 2026-01-02 04:05:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:05:03.081622 | orchestrator | 2026-01-02 04:05:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:05:03.081670 | orchestrator | 2026-01-02 04:05:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:06.131713 | orchestrator | 2026-01-02 04:05:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:05:06.132836 | orchestrator | 2026-01-02 04:05:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:05:06.133464 | orchestrator | 2026-01-02 04:05:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:09.175704 | orchestrator | 2026-01-02 04:05:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:05:09.176906 | orchestrator | 2026-01-02 04:05:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:05:09.176940 | orchestrator | 2026-01-02 04:05:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:05:12.226185 | orchestrator | 2026-01-02 04:05:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:05:12.227177 | orchestrator | 2026-01-02 04:05:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:05:12.227207 | orchestrator | 2026-01-02 04:05:12 | INFO  | Wait 1 second(s) 
until the next check
2026-01-02 04:05:15.272607 | orchestrator | 2026-01-02 04:05:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:05:15.274163 | orchestrator | 2026-01-02 04:05:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:05:15.274210 | orchestrator | 2026-01-02 04:05:15 | INFO  | Wait 1 second(s) until the next check
[... identical polling output for tasks d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 and 98c74331-4fb1-4caa-8a2a-6f826991d305, both remaining in state STARTED, repeated every ~3 seconds from 04:05:18 through 04:10:26 ...]
2026-01-02 04:10:29.214414 | orchestrator | 2026-01-02 04:10:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:10:29.215871 | orchestrator | 2026-01-02 04:10:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:10:29.215950 | orchestrator | 2026-01-02 04:10:29 | INFO  | Wait 1 second(s)
until the next check 2026-01-02 04:10:32.258918 | orchestrator | 2026-01-02 04:10:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:32.260420 | orchestrator | 2026-01-02 04:10:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:32.260468 | orchestrator | 2026-01-02 04:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:35.307508 | orchestrator | 2026-01-02 04:10:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:35.308470 | orchestrator | 2026-01-02 04:10:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:35.308570 | orchestrator | 2026-01-02 04:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:38.352656 | orchestrator | 2026-01-02 04:10:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:38.354461 | orchestrator | 2026-01-02 04:10:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:38.354514 | orchestrator | 2026-01-02 04:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:41.397298 | orchestrator | 2026-01-02 04:10:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:41.398973 | orchestrator | 2026-01-02 04:10:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:41.399008 | orchestrator | 2026-01-02 04:10:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:44.446581 | orchestrator | 2026-01-02 04:10:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:44.448369 | orchestrator | 2026-01-02 04:10:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:44.448432 | orchestrator | 2026-01-02 04:10:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:47.495086 | orchestrator | 2026-01-02 
04:10:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:47.497108 | orchestrator | 2026-01-02 04:10:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:47.497187 | orchestrator | 2026-01-02 04:10:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:50.544006 | orchestrator | 2026-01-02 04:10:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:50.545770 | orchestrator | 2026-01-02 04:10:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:50.545841 | orchestrator | 2026-01-02 04:10:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:53.590721 | orchestrator | 2026-01-02 04:10:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:53.592613 | orchestrator | 2026-01-02 04:10:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:53.592687 | orchestrator | 2026-01-02 04:10:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:56.631325 | orchestrator | 2026-01-02 04:10:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:56.631588 | orchestrator | 2026-01-02 04:10:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:56.631627 | orchestrator | 2026-01-02 04:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:10:59.678011 | orchestrator | 2026-01-02 04:10:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:10:59.679517 | orchestrator | 2026-01-02 04:10:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:10:59.679584 | orchestrator | 2026-01-02 04:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:02.726668 | orchestrator | 2026-01-02 04:11:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:11:02.728069 | orchestrator | 2026-01-02 04:11:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:02.728242 | orchestrator | 2026-01-02 04:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:05.782228 | orchestrator | 2026-01-02 04:11:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:05.784244 | orchestrator | 2026-01-02 04:11:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:05.784652 | orchestrator | 2026-01-02 04:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:08.834649 | orchestrator | 2026-01-02 04:11:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:08.836235 | orchestrator | 2026-01-02 04:11:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:08.836391 | orchestrator | 2026-01-02 04:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:11.891039 | orchestrator | 2026-01-02 04:11:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:11.892746 | orchestrator | 2026-01-02 04:11:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:11.893049 | orchestrator | 2026-01-02 04:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:14.945375 | orchestrator | 2026-01-02 04:11:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:14.946967 | orchestrator | 2026-01-02 04:11:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:14.947017 | orchestrator | 2026-01-02 04:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:17.994574 | orchestrator | 2026-01-02 04:11:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:17.997586 | orchestrator | 2026-01-02 04:11:17 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:17.997646 | orchestrator | 2026-01-02 04:11:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:21.056672 | orchestrator | 2026-01-02 04:11:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:21.058873 | orchestrator | 2026-01-02 04:11:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:21.058921 | orchestrator | 2026-01-02 04:11:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:24.101311 | orchestrator | 2026-01-02 04:11:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:24.102304 | orchestrator | 2026-01-02 04:11:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:24.102337 | orchestrator | 2026-01-02 04:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:27.154579 | orchestrator | 2026-01-02 04:11:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:27.156155 | orchestrator | 2026-01-02 04:11:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:27.156565 | orchestrator | 2026-01-02 04:11:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:30.209629 | orchestrator | 2026-01-02 04:11:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:30.211000 | orchestrator | 2026-01-02 04:11:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:30.211185 | orchestrator | 2026-01-02 04:11:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:33.249968 | orchestrator | 2026-01-02 04:11:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:33.251080 | orchestrator | 2026-01-02 04:11:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:11:33.251227 | orchestrator | 2026-01-02 04:11:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:36.297741 | orchestrator | 2026-01-02 04:11:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:36.298895 | orchestrator | 2026-01-02 04:11:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:36.299238 | orchestrator | 2026-01-02 04:11:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:39.348170 | orchestrator | 2026-01-02 04:11:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:39.349339 | orchestrator | 2026-01-02 04:11:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:39.349364 | orchestrator | 2026-01-02 04:11:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:42.396499 | orchestrator | 2026-01-02 04:11:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:42.398377 | orchestrator | 2026-01-02 04:11:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:42.398491 | orchestrator | 2026-01-02 04:11:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:45.445552 | orchestrator | 2026-01-02 04:11:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:45.446866 | orchestrator | 2026-01-02 04:11:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:45.453918 | orchestrator | 2026-01-02 04:11:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:48.491831 | orchestrator | 2026-01-02 04:11:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:48.493301 | orchestrator | 2026-01-02 04:11:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:48.493397 | orchestrator | 2026-01-02 04:11:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:11:51.536153 | orchestrator | 2026-01-02 04:11:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:51.537203 | orchestrator | 2026-01-02 04:11:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:51.537236 | orchestrator | 2026-01-02 04:11:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:54.587056 | orchestrator | 2026-01-02 04:11:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:54.588942 | orchestrator | 2026-01-02 04:11:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:54.589004 | orchestrator | 2026-01-02 04:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:11:57.636193 | orchestrator | 2026-01-02 04:11:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:11:57.636747 | orchestrator | 2026-01-02 04:11:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:11:57.637145 | orchestrator | 2026-01-02 04:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:00.684035 | orchestrator | 2026-01-02 04:12:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:00.685621 | orchestrator | 2026-01-02 04:12:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:00.685680 | orchestrator | 2026-01-02 04:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:03.729596 | orchestrator | 2026-01-02 04:12:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:03.730901 | orchestrator | 2026-01-02 04:12:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:03.731025 | orchestrator | 2026-01-02 04:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:06.783703 | orchestrator | 2026-01-02 
04:12:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:06.785071 | orchestrator | 2026-01-02 04:12:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:06.785178 | orchestrator | 2026-01-02 04:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:09.838486 | orchestrator | 2026-01-02 04:12:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:09.840395 | orchestrator | 2026-01-02 04:12:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:09.840462 | orchestrator | 2026-01-02 04:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:12.889493 | orchestrator | 2026-01-02 04:12:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:12.890992 | orchestrator | 2026-01-02 04:12:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:12.891056 | orchestrator | 2026-01-02 04:12:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:15.935526 | orchestrator | 2026-01-02 04:12:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:15.937380 | orchestrator | 2026-01-02 04:12:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:15.937660 | orchestrator | 2026-01-02 04:12:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:18.985919 | orchestrator | 2026-01-02 04:12:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:18.987851 | orchestrator | 2026-01-02 04:12:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:18.988004 | orchestrator | 2026-01-02 04:12:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:22.032969 | orchestrator | 2026-01-02 04:12:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:12:22.033845 | orchestrator | 2026-01-02 04:12:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:22.033956 | orchestrator | 2026-01-02 04:12:22 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:25.081573 | orchestrator | 2026-01-02 04:12:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:25.083392 | orchestrator | 2026-01-02 04:12:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:25.083532 | orchestrator | 2026-01-02 04:12:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:28.128619 | orchestrator | 2026-01-02 04:12:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:28.129913 | orchestrator | 2026-01-02 04:12:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:28.129944 | orchestrator | 2026-01-02 04:12:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:31.182128 | orchestrator | 2026-01-02 04:12:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:31.183966 | orchestrator | 2026-01-02 04:12:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:31.184026 | orchestrator | 2026-01-02 04:12:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:34.228491 | orchestrator | 2026-01-02 04:12:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:34.229459 | orchestrator | 2026-01-02 04:12:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:34.229534 | orchestrator | 2026-01-02 04:12:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:37.277257 | orchestrator | 2026-01-02 04:12:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:37.278174 | orchestrator | 2026-01-02 04:12:37 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:37.278472 | orchestrator | 2026-01-02 04:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:40.322321 | orchestrator | 2026-01-02 04:12:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:40.323677 | orchestrator | 2026-01-02 04:12:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:40.323724 | orchestrator | 2026-01-02 04:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:43.363191 | orchestrator | 2026-01-02 04:12:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:43.366001 | orchestrator | 2026-01-02 04:12:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:43.366375 | orchestrator | 2026-01-02 04:12:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:46.406310 | orchestrator | 2026-01-02 04:12:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:46.408254 | orchestrator | 2026-01-02 04:12:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:46.408342 | orchestrator | 2026-01-02 04:12:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:49.457200 | orchestrator | 2026-01-02 04:12:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:49.459765 | orchestrator | 2026-01-02 04:12:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:49.459818 | orchestrator | 2026-01-02 04:12:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:52.510689 | orchestrator | 2026-01-02 04:12:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:52.511231 | orchestrator | 2026-01-02 04:12:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:12:52.511279 | orchestrator | 2026-01-02 04:12:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:55.561817 | orchestrator | 2026-01-02 04:12:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:55.563456 | orchestrator | 2026-01-02 04:12:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:55.563502 | orchestrator | 2026-01-02 04:12:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:12:58.613428 | orchestrator | 2026-01-02 04:12:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:12:58.616352 | orchestrator | 2026-01-02 04:12:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:12:58.616413 | orchestrator | 2026-01-02 04:12:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:01.664405 | orchestrator | 2026-01-02 04:13:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:01.668480 | orchestrator | 2026-01-02 04:13:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:01.668554 | orchestrator | 2026-01-02 04:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:04.714554 | orchestrator | 2026-01-02 04:13:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:04.718249 | orchestrator | 2026-01-02 04:13:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:04.718311 | orchestrator | 2026-01-02 04:13:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:07.765177 | orchestrator | 2026-01-02 04:13:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:07.765935 | orchestrator | 2026-01-02 04:13:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:07.766011 | orchestrator | 2026-01-02 04:13:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:13:10.819556 | orchestrator | 2026-01-02 04:13:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:10.821803 | orchestrator | 2026-01-02 04:13:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:10.822169 | orchestrator | 2026-01-02 04:13:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:13.866298 | orchestrator | 2026-01-02 04:13:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:13.867669 | orchestrator | 2026-01-02 04:13:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:13.867697 | orchestrator | 2026-01-02 04:13:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:16.905740 | orchestrator | 2026-01-02 04:13:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:16.907701 | orchestrator | 2026-01-02 04:13:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:16.907781 | orchestrator | 2026-01-02 04:13:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:19.954002 | orchestrator | 2026-01-02 04:13:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:19.956143 | orchestrator | 2026-01-02 04:13:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:19.956197 | orchestrator | 2026-01-02 04:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:23.001780 | orchestrator | 2026-01-02 04:13:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:23.003885 | orchestrator | 2026-01-02 04:13:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:23.003952 | orchestrator | 2026-01-02 04:13:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:26.051699 | orchestrator | 2026-01-02 
04:13:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:26.053999 | orchestrator | 2026-01-02 04:13:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:26.054149 | orchestrator | 2026-01-02 04:13:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:29.095222 | orchestrator | 2026-01-02 04:13:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:29.096973 | orchestrator | 2026-01-02 04:13:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:29.097016 | orchestrator | 2026-01-02 04:13:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:32.140758 | orchestrator | 2026-01-02 04:13:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:32.142948 | orchestrator | 2026-01-02 04:13:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:32.143129 | orchestrator | 2026-01-02 04:13:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:35.192335 | orchestrator | 2026-01-02 04:13:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:35.193790 | orchestrator | 2026-01-02 04:13:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:35.193878 | orchestrator | 2026-01-02 04:13:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:38.243191 | orchestrator | 2026-01-02 04:13:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:38.244376 | orchestrator | 2026-01-02 04:13:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:38.244603 | orchestrator | 2026-01-02 04:13:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:41.285438 | orchestrator | 2026-01-02 04:13:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:13:41.286797 | orchestrator | 2026-01-02 04:13:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:41.286852 | orchestrator | 2026-01-02 04:13:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:44.336568 | orchestrator | 2026-01-02 04:13:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:44.338652 | orchestrator | 2026-01-02 04:13:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:44.338742 | orchestrator | 2026-01-02 04:13:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:47.385388 | orchestrator | 2026-01-02 04:13:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:47.387470 | orchestrator | 2026-01-02 04:13:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:47.387596 | orchestrator | 2026-01-02 04:13:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:50.431531 | orchestrator | 2026-01-02 04:13:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:50.433281 | orchestrator | 2026-01-02 04:13:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:50.433340 | orchestrator | 2026-01-02 04:13:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:53.476405 | orchestrator | 2026-01-02 04:13:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:53.477601 | orchestrator | 2026-01-02 04:13:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:53.477675 | orchestrator | 2026-01-02 04:13:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:56.521674 | orchestrator | 2026-01-02 04:13:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:56.523766 | orchestrator | 2026-01-02 04:13:56 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:56.523803 | orchestrator | 2026-01-02 04:13:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:13:59.568546 | orchestrator | 2026-01-02 04:13:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:13:59.571293 | orchestrator | 2026-01-02 04:13:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:13:59.571388 | orchestrator | 2026-01-02 04:13:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:02.606536 | orchestrator | 2026-01-02 04:14:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:14:02.607818 | orchestrator | 2026-01-02 04:14:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:14:02.607977 | orchestrator | 2026-01-02 04:14:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:05.654654 | orchestrator | 2026-01-02 04:14:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:14:05.655705 | orchestrator | 2026-01-02 04:14:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:14:05.655840 | orchestrator | 2026-01-02 04:14:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:08.705845 | orchestrator | 2026-01-02 04:14:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:14:08.707664 | orchestrator | 2026-01-02 04:14:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:14:08.707783 | orchestrator | 2026-01-02 04:14:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:14:11.759220 | orchestrator | 2026-01-02 04:14:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:14:11.759970 | orchestrator | 2026-01-02 04:14:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:14:11.760081 | orchestrator | 2026-01-02 04:14:11 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:14:14.804291 | orchestrator | 2026-01-02 04:14:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:14:14.805093 | orchestrator | 2026-01-02 04:14:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:14:14.805304 | orchestrator | 2026-01-02 04:14:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 04:14:17 through 04:19:40; both tasks remained in state STARTED throughout ...]
2026-01-02 04:19:43.941479 | orchestrator | 2026-01-02 04:19:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:19:43.943419 | orchestrator | 2026-01-02 04:19:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:19:43.943527 | orchestrator | 2026-01-02 04:19:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:19:46.988688 | orchestrator | 2026-01-02 04:19:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:19:46.991890 | orchestrator | 2026-01-02 04:19:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:19:46.991937 | orchestrator | 2026-01-02 04:19:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:19:50.034350 | orchestrator | 2026-01-02 04:19:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:19:50.036056 | orchestrator | 2026-01-02 04:19:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:19:50.036245 | orchestrator | 2026-01-02 04:19:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:19:53.084193 | orchestrator | 2026-01-02 04:19:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:19:53.088306 | orchestrator | 2026-01-02 04:19:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:19:53.088385 | orchestrator | 2026-01-02 04:19:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:19:56.128082 | orchestrator | 2026-01-02 04:19:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:19:56.130928 | orchestrator | 2026-01-02 04:19:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:19:56.130964 | orchestrator | 2026-01-02 04:19:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:19:59.181883 | orchestrator | 2026-01-02 04:19:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:19:59.183490 | orchestrator | 2026-01-02 04:19:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:19:59.183604 | orchestrator | 2026-01-02 04:19:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:02.231100 | orchestrator | 2026-01-02 
04:20:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:02.232798 | orchestrator | 2026-01-02 04:20:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:02.232868 | orchestrator | 2026-01-02 04:20:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:05.281771 | orchestrator | 2026-01-02 04:20:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:05.286395 | orchestrator | 2026-01-02 04:20:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:05.286486 | orchestrator | 2026-01-02 04:20:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:08.330522 | orchestrator | 2026-01-02 04:20:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:08.332651 | orchestrator | 2026-01-02 04:20:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:08.333241 | orchestrator | 2026-01-02 04:20:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:11.377536 | orchestrator | 2026-01-02 04:20:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:11.379193 | orchestrator | 2026-01-02 04:20:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:11.379270 | orchestrator | 2026-01-02 04:20:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:14.425207 | orchestrator | 2026-01-02 04:20:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:14.427140 | orchestrator | 2026-01-02 04:20:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:14.427567 | orchestrator | 2026-01-02 04:20:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:17.475440 | orchestrator | 2026-01-02 04:20:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:20:17.476744 | orchestrator | 2026-01-02 04:20:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:17.476879 | orchestrator | 2026-01-02 04:20:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:20.521692 | orchestrator | 2026-01-02 04:20:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:20.522985 | orchestrator | 2026-01-02 04:20:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:20.523162 | orchestrator | 2026-01-02 04:20:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:23.567734 | orchestrator | 2026-01-02 04:20:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:23.569108 | orchestrator | 2026-01-02 04:20:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:23.569160 | orchestrator | 2026-01-02 04:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:26.617211 | orchestrator | 2026-01-02 04:20:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:26.619617 | orchestrator | 2026-01-02 04:20:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:26.619770 | orchestrator | 2026-01-02 04:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:29.663378 | orchestrator | 2026-01-02 04:20:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:29.666587 | orchestrator | 2026-01-02 04:20:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:29.666676 | orchestrator | 2026-01-02 04:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:32.711998 | orchestrator | 2026-01-02 04:20:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:32.714217 | orchestrator | 2026-01-02 04:20:32 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:32.714297 | orchestrator | 2026-01-02 04:20:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:35.759130 | orchestrator | 2026-01-02 04:20:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:35.761075 | orchestrator | 2026-01-02 04:20:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:35.761198 | orchestrator | 2026-01-02 04:20:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:38.806178 | orchestrator | 2026-01-02 04:20:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:38.807661 | orchestrator | 2026-01-02 04:20:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:38.807717 | orchestrator | 2026-01-02 04:20:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:41.856170 | orchestrator | 2026-01-02 04:20:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:41.857637 | orchestrator | 2026-01-02 04:20:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:41.857899 | orchestrator | 2026-01-02 04:20:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:44.903522 | orchestrator | 2026-01-02 04:20:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:44.904781 | orchestrator | 2026-01-02 04:20:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:44.904859 | orchestrator | 2026-01-02 04:20:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:47.953311 | orchestrator | 2026-01-02 04:20:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:47.954856 | orchestrator | 2026-01-02 04:20:47 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:20:47.954915 | orchestrator | 2026-01-02 04:20:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:51.004061 | orchestrator | 2026-01-02 04:20:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:51.006626 | orchestrator | 2026-01-02 04:20:51 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:51.006699 | orchestrator | 2026-01-02 04:20:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:54.056558 | orchestrator | 2026-01-02 04:20:54 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:54.058649 | orchestrator | 2026-01-02 04:20:54 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:54.058730 | orchestrator | 2026-01-02 04:20:54 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:20:57.112114 | orchestrator | 2026-01-02 04:20:57 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:20:57.113955 | orchestrator | 2026-01-02 04:20:57 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:20:57.114306 | orchestrator | 2026-01-02 04:20:57 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:00.159551 | orchestrator | 2026-01-02 04:21:00 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:00.161289 | orchestrator | 2026-01-02 04:21:00 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:00.161353 | orchestrator | 2026-01-02 04:21:00 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:03.205990 | orchestrator | 2026-01-02 04:21:03 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:03.207725 | orchestrator | 2026-01-02 04:21:03 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:03.207881 | orchestrator | 2026-01-02 04:21:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:21:06.248920 | orchestrator | 2026-01-02 04:21:06 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:06.250995 | orchestrator | 2026-01-02 04:21:06 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:06.251192 | orchestrator | 2026-01-02 04:21:06 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:09.287772 | orchestrator | 2026-01-02 04:21:09 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:09.289515 | orchestrator | 2026-01-02 04:21:09 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:09.289578 | orchestrator | 2026-01-02 04:21:09 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:12.336700 | orchestrator | 2026-01-02 04:21:12 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:12.338595 | orchestrator | 2026-01-02 04:21:12 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:12.338661 | orchestrator | 2026-01-02 04:21:12 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:15.384328 | orchestrator | 2026-01-02 04:21:15 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:15.386110 | orchestrator | 2026-01-02 04:21:15 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:15.386314 | orchestrator | 2026-01-02 04:21:15 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:18.439021 | orchestrator | 2026-01-02 04:21:18 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:18.441591 | orchestrator | 2026-01-02 04:21:18 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:18.441683 | orchestrator | 2026-01-02 04:21:18 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:21.489555 | orchestrator | 2026-01-02 
04:21:21 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:21.491757 | orchestrator | 2026-01-02 04:21:21 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:21.491848 | orchestrator | 2026-01-02 04:21:21 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:24.540184 | orchestrator | 2026-01-02 04:21:24 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:24.541742 | orchestrator | 2026-01-02 04:21:24 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:24.541778 | orchestrator | 2026-01-02 04:21:24 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:27.585869 | orchestrator | 2026-01-02 04:21:27 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:27.588354 | orchestrator | 2026-01-02 04:21:27 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:27.588413 | orchestrator | 2026-01-02 04:21:27 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:30.636357 | orchestrator | 2026-01-02 04:21:30 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:30.640486 | orchestrator | 2026-01-02 04:21:30 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:30.640562 | orchestrator | 2026-01-02 04:21:30 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:33.689269 | orchestrator | 2026-01-02 04:21:33 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:33.692204 | orchestrator | 2026-01-02 04:21:33 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:33.692365 | orchestrator | 2026-01-02 04:21:33 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:36.739307 | orchestrator | 2026-01-02 04:21:36 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:21:36.741918 | orchestrator | 2026-01-02 04:21:36 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:36.741987 | orchestrator | 2026-01-02 04:21:36 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:39.786009 | orchestrator | 2026-01-02 04:21:39 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:39.786887 | orchestrator | 2026-01-02 04:21:39 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:39.787022 | orchestrator | 2026-01-02 04:21:39 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:42.837439 | orchestrator | 2026-01-02 04:21:42 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:42.839504 | orchestrator | 2026-01-02 04:21:42 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:42.839559 | orchestrator | 2026-01-02 04:21:42 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:45.887894 | orchestrator | 2026-01-02 04:21:45 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:45.889590 | orchestrator | 2026-01-02 04:21:45 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:45.889646 | orchestrator | 2026-01-02 04:21:45 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:48.936839 | orchestrator | 2026-01-02 04:21:48 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:48.938224 | orchestrator | 2026-01-02 04:21:48 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:48.938259 | orchestrator | 2026-01-02 04:21:48 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:51.987296 | orchestrator | 2026-01-02 04:21:51 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:51.988372 | orchestrator | 2026-01-02 04:21:51 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:51.988409 | orchestrator | 2026-01-02 04:21:51 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:55.035680 | orchestrator | 2026-01-02 04:21:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:55.037723 | orchestrator | 2026-01-02 04:21:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:55.037768 | orchestrator | 2026-01-02 04:21:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:21:58.080257 | orchestrator | 2026-01-02 04:21:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:21:58.081142 | orchestrator | 2026-01-02 04:21:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:21:58.081240 | orchestrator | 2026-01-02 04:21:58 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:01.125157 | orchestrator | 2026-01-02 04:22:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:01.127067 | orchestrator | 2026-01-02 04:22:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:01.127162 | orchestrator | 2026-01-02 04:22:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:04.168759 | orchestrator | 2026-01-02 04:22:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:04.170519 | orchestrator | 2026-01-02 04:22:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:04.170561 | orchestrator | 2026-01-02 04:22:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:07.217099 | orchestrator | 2026-01-02 04:22:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:07.218968 | orchestrator | 2026-01-02 04:22:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:22:07.219114 | orchestrator | 2026-01-02 04:22:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:10.261983 | orchestrator | 2026-01-02 04:22:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:10.263954 | orchestrator | 2026-01-02 04:22:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:10.264210 | orchestrator | 2026-01-02 04:22:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:13.310740 | orchestrator | 2026-01-02 04:22:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:13.313719 | orchestrator | 2026-01-02 04:22:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:13.313779 | orchestrator | 2026-01-02 04:22:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:16.355391 | orchestrator | 2026-01-02 04:22:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:16.357686 | orchestrator | 2026-01-02 04:22:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:16.357769 | orchestrator | 2026-01-02 04:22:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:19.407193 | orchestrator | 2026-01-02 04:22:19 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:19.409002 | orchestrator | 2026-01-02 04:22:19 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:19.409091 | orchestrator | 2026-01-02 04:22:19 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:22.453758 | orchestrator | 2026-01-02 04:22:22 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:22.455371 | orchestrator | 2026-01-02 04:22:22 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:22.455548 | orchestrator | 2026-01-02 04:22:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:22:25.505920 | orchestrator | 2026-01-02 04:22:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:25.507544 | orchestrator | 2026-01-02 04:22:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:25.507577 | orchestrator | 2026-01-02 04:22:25 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:28.560174 | orchestrator | 2026-01-02 04:22:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:28.561507 | orchestrator | 2026-01-02 04:22:28 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:28.561600 | orchestrator | 2026-01-02 04:22:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:31.601339 | orchestrator | 2026-01-02 04:22:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:31.602155 | orchestrator | 2026-01-02 04:22:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:31.602189 | orchestrator | 2026-01-02 04:22:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:34.648319 | orchestrator | 2026-01-02 04:22:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:34.652682 | orchestrator | 2026-01-02 04:22:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:34.652750 | orchestrator | 2026-01-02 04:22:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:37.692562 | orchestrator | 2026-01-02 04:22:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:37.694362 | orchestrator | 2026-01-02 04:22:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:37.694948 | orchestrator | 2026-01-02 04:22:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:40.732327 | orchestrator | 2026-01-02 
04:22:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:40.733490 | orchestrator | 2026-01-02 04:22:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:40.733540 | orchestrator | 2026-01-02 04:22:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:43.782139 | orchestrator | 2026-01-02 04:22:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:43.783705 | orchestrator | 2026-01-02 04:22:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:43.783812 | orchestrator | 2026-01-02 04:22:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:46.828323 | orchestrator | 2026-01-02 04:22:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:46.830077 | orchestrator | 2026-01-02 04:22:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:46.830188 | orchestrator | 2026-01-02 04:22:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:49.876666 | orchestrator | 2026-01-02 04:22:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:49.877947 | orchestrator | 2026-01-02 04:22:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:49.877996 | orchestrator | 2026-01-02 04:22:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:52.926136 | orchestrator | 2026-01-02 04:22:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:52.927180 | orchestrator | 2026-01-02 04:22:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:52.927214 | orchestrator | 2026-01-02 04:22:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:55.974595 | orchestrator | 2026-01-02 04:22:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:22:55.976322 | orchestrator | 2026-01-02 04:22:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:55.976440 | orchestrator | 2026-01-02 04:22:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:22:59.022689 | orchestrator | 2026-01-02 04:22:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:22:59.024272 | orchestrator | 2026-01-02 04:22:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:22:59.024309 | orchestrator | 2026-01-02 04:22:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:02.069898 | orchestrator | 2026-01-02 04:23:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:02.070010 | orchestrator | 2026-01-02 04:23:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:02.070086 | orchestrator | 2026-01-02 04:23:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:05.118367 | orchestrator | 2026-01-02 04:23:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:05.119568 | orchestrator | 2026-01-02 04:23:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:05.119670 | orchestrator | 2026-01-02 04:23:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:08.166961 | orchestrator | 2026-01-02 04:23:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:08.168360 | orchestrator | 2026-01-02 04:23:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:08.168402 | orchestrator | 2026-01-02 04:23:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:11.213967 | orchestrator | 2026-01-02 04:23:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:11.214163 | orchestrator | 2026-01-02 04:23:11 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:11.214181 | orchestrator | 2026-01-02 04:23:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:14.261041 | orchestrator | 2026-01-02 04:23:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:14.261825 | orchestrator | 2026-01-02 04:23:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:14.261867 | orchestrator | 2026-01-02 04:23:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:17.304984 | orchestrator | 2026-01-02 04:23:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:17.306580 | orchestrator | 2026-01-02 04:23:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:17.306723 | orchestrator | 2026-01-02 04:23:17 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:20.352442 | orchestrator | 2026-01-02 04:23:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:20.354398 | orchestrator | 2026-01-02 04:23:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:20.354717 | orchestrator | 2026-01-02 04:23:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:23.403011 | orchestrator | 2026-01-02 04:23:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:23.404411 | orchestrator | 2026-01-02 04:23:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:23:23.404490 | orchestrator | 2026-01-02 04:23:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:23:26.450540 | orchestrator | 2026-01-02 04:23:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:23:26.452733 | orchestrator | 2026-01-02 04:23:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:23:26.452893 | orchestrator | 2026-01-02 04:23:26 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:23:29.503282 | orchestrator | 2026-01-02 04:23:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:23:29.506512 | orchestrator | 2026-01-02 04:23:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:23:29.506636 | orchestrator | 2026-01-02 04:23:29 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:28:25.149264 | orchestrator | 2026-01-02 04:28:25 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:28:25.150763 | orchestrator | 2026-01-02 04:28:25 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED
2026-01-02 04:28:25.150810 | orchestrator | 2026-01-02 04:28:25 | INFO  | Wait 1 second(s) until the next check
2026-01-02 04:28:28.195043 | orchestrator | 2026-01-02 04:28:28 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED
2026-01-02 04:28:28.197440 | orchestrator | 2026-01-02 04:28:28 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:28.197492 | orchestrator | 2026-01-02 04:28:28 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:31.238792 | orchestrator | 2026-01-02 04:28:31 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:31.240110 | orchestrator | 2026-01-02 04:28:31 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:31.240189 | orchestrator | 2026-01-02 04:28:31 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:34.284654 | orchestrator | 2026-01-02 04:28:34 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:34.285954 | orchestrator | 2026-01-02 04:28:34 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:34.286101 | orchestrator | 2026-01-02 04:28:34 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:37.329342 | orchestrator | 2026-01-02 04:28:37 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:37.331072 | orchestrator | 2026-01-02 04:28:37 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:37.331142 | orchestrator | 2026-01-02 04:28:37 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:40.379810 | orchestrator | 2026-01-02 04:28:40 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:40.381805 | orchestrator | 2026-01-02 04:28:40 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:40.381895 | orchestrator | 2026-01-02 04:28:40 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:43.426496 | orchestrator | 2026-01-02 04:28:43 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:43.428990 | orchestrator | 2026-01-02 04:28:43 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:28:43.429190 | orchestrator | 2026-01-02 04:28:43 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:46.486813 | orchestrator | 2026-01-02 04:28:46 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:46.488878 | orchestrator | 2026-01-02 04:28:46 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:46.488941 | orchestrator | 2026-01-02 04:28:46 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:49.530667 | orchestrator | 2026-01-02 04:28:49 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:49.531543 | orchestrator | 2026-01-02 04:28:49 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:49.531675 | orchestrator | 2026-01-02 04:28:49 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:52.576366 | orchestrator | 2026-01-02 04:28:52 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:52.579607 | orchestrator | 2026-01-02 04:28:52 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:52.579741 | orchestrator | 2026-01-02 04:28:52 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:55.627567 | orchestrator | 2026-01-02 04:28:55 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:55.629344 | orchestrator | 2026-01-02 04:28:55 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:55.629455 | orchestrator | 2026-01-02 04:28:55 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:28:58.678334 | orchestrator | 2026-01-02 04:28:58 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:28:58.680407 | orchestrator | 2026-01-02 04:28:58 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:28:58.680487 | orchestrator | 2026-01-02 04:28:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:29:01.726427 | orchestrator | 2026-01-02 04:29:01 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:01.728615 | orchestrator | 2026-01-02 04:29:01 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:01.728869 | orchestrator | 2026-01-02 04:29:01 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:04.769577 | orchestrator | 2026-01-02 04:29:04 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:04.771996 | orchestrator | 2026-01-02 04:29:04 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:04.772079 | orchestrator | 2026-01-02 04:29:04 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:07.815656 | orchestrator | 2026-01-02 04:29:07 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:07.817392 | orchestrator | 2026-01-02 04:29:07 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:07.817611 | orchestrator | 2026-01-02 04:29:07 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:10.863721 | orchestrator | 2026-01-02 04:29:10 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:10.866096 | orchestrator | 2026-01-02 04:29:10 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:10.866143 | orchestrator | 2026-01-02 04:29:10 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:13.917342 | orchestrator | 2026-01-02 04:29:13 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:13.919439 | orchestrator | 2026-01-02 04:29:13 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:13.919656 | orchestrator | 2026-01-02 04:29:13 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:16.970134 | orchestrator | 2026-01-02 
04:29:16 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:16.971552 | orchestrator | 2026-01-02 04:29:16 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:16.971589 | orchestrator | 2026-01-02 04:29:16 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:20.013035 | orchestrator | 2026-01-02 04:29:20 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:20.014474 | orchestrator | 2026-01-02 04:29:20 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:20.014518 | orchestrator | 2026-01-02 04:29:20 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:23.057110 | orchestrator | 2026-01-02 04:29:23 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:23.058488 | orchestrator | 2026-01-02 04:29:23 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:23.058527 | orchestrator | 2026-01-02 04:29:23 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:26.105522 | orchestrator | 2026-01-02 04:29:26 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:26.106749 | orchestrator | 2026-01-02 04:29:26 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:26.106788 | orchestrator | 2026-01-02 04:29:26 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:29.154344 | orchestrator | 2026-01-02 04:29:29 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:29.156887 | orchestrator | 2026-01-02 04:29:29 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:29.156999 | orchestrator | 2026-01-02 04:29:29 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:32.206226 | orchestrator | 2026-01-02 04:29:32 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state 
STARTED 2026-01-02 04:29:32.208749 | orchestrator | 2026-01-02 04:29:32 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:32.208806 | orchestrator | 2026-01-02 04:29:32 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:35.251461 | orchestrator | 2026-01-02 04:29:35 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:35.254879 | orchestrator | 2026-01-02 04:29:35 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:35.254940 | orchestrator | 2026-01-02 04:29:35 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:38.298830 | orchestrator | 2026-01-02 04:29:38 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:38.299209 | orchestrator | 2026-01-02 04:29:38 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:38.299247 | orchestrator | 2026-01-02 04:29:38 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:41.344784 | orchestrator | 2026-01-02 04:29:41 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:41.345892 | orchestrator | 2026-01-02 04:29:41 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:41.345937 | orchestrator | 2026-01-02 04:29:41 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:44.389481 | orchestrator | 2026-01-02 04:29:44 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:44.390770 | orchestrator | 2026-01-02 04:29:44 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:44.390808 | orchestrator | 2026-01-02 04:29:44 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:47.437722 | orchestrator | 2026-01-02 04:29:47 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:47.439352 | orchestrator | 2026-01-02 04:29:47 | INFO  
| Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:47.441369 | orchestrator | 2026-01-02 04:29:47 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:50.486169 | orchestrator | 2026-01-02 04:29:50 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:50.487205 | orchestrator | 2026-01-02 04:29:50 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:50.487287 | orchestrator | 2026-01-02 04:29:50 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:53.532928 | orchestrator | 2026-01-02 04:29:53 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:53.534566 | orchestrator | 2026-01-02 04:29:53 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:53.534723 | orchestrator | 2026-01-02 04:29:53 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:56.580931 | orchestrator | 2026-01-02 04:29:56 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:56.582458 | orchestrator | 2026-01-02 04:29:56 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:56.582532 | orchestrator | 2026-01-02 04:29:56 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:29:59.632631 | orchestrator | 2026-01-02 04:29:59 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:29:59.634953 | orchestrator | 2026-01-02 04:29:59 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:29:59.635296 | orchestrator | 2026-01-02 04:29:59 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:30:02.683777 | orchestrator | 2026-01-02 04:30:02 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:30:02.686206 | orchestrator | 2026-01-02 04:30:02 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 
04:30:02.686302 | orchestrator | 2026-01-02 04:30:02 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:30:05.735742 | orchestrator | 2026-01-02 04:30:05 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:30:05.738010 | orchestrator | 2026-01-02 04:30:05 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:30:05.738117 | orchestrator | 2026-01-02 04:30:05 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:30:08.783296 | orchestrator | 2026-01-02 04:30:08 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:30:08.783971 | orchestrator | 2026-01-02 04:30:08 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:30:08.784014 | orchestrator | 2026-01-02 04:30:08 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:30:11.829698 | orchestrator | 2026-01-02 04:30:11 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:30:11.831217 | orchestrator | 2026-01-02 04:30:11 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:30:11.831259 | orchestrator | 2026-01-02 04:30:11 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:30:14.878570 | orchestrator | 2026-01-02 04:30:14 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:30:14.880335 | orchestrator | 2026-01-02 04:30:14 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:30:14.880378 | orchestrator | 2026-01-02 04:30:14 | INFO  | Wait 1 second(s) until the next check 2026-01-02 04:30:17.920029 | orchestrator | 2026-01-02 04:30:17 | INFO  | Task d499df3b-4e1e-4442-abbb-f8a3d2c1ca79 is in state STARTED 2026-01-02 04:30:17.920433 | orchestrator | 2026-01-02 04:30:17 | INFO  | Task 98c74331-4fb1-4caa-8a2a-6f826991d305 is in state STARTED 2026-01-02 04:30:17.920465 | orchestrator | 2026-01-02 04:30:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-02 04:30:21.172276 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-01-02 04:30:21.175352 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-02 04:30:21.955639 | 2026-01-02 04:30:21.955877 | PLAY [Post output play] 2026-01-02 04:30:21.974564 | 2026-01-02 04:30:21.974729 | LOOP [stage-output : Register sources] 2026-01-02 04:30:22.046188 | 2026-01-02 04:30:22.046534 | TASK [stage-output : Check sudo] 2026-01-02 04:30:22.982507 | orchestrator | sudo: a password is required 2026-01-02 04:30:23.098724 | orchestrator | ok: Runtime: 0:00:00.014046 2026-01-02 04:30:23.115483 | 2026-01-02 04:30:23.115649 | LOOP [stage-output : Set source and destination for files and folders] 2026-01-02 04:30:23.158529 | 2026-01-02 04:30:23.158900 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-01-02 04:30:23.227871 | orchestrator | ok 2026-01-02 04:30:23.236551 | 2026-01-02 04:30:23.236701 | LOOP [stage-output : Ensure target folders exist] 2026-01-02 04:30:23.706354 | orchestrator | ok: "docs" 2026-01-02 04:30:23.706625 | 2026-01-02 04:30:23.975485 | orchestrator | ok: "artifacts" 2026-01-02 04:30:24.231337 | orchestrator | ok: "logs" 2026-01-02 04:30:24.250203 | 2026-01-02 04:30:24.250410 | LOOP [stage-output : Copy files and folders to staging folder] 2026-01-02 04:30:24.287929 | 2026-01-02 04:30:24.288244 | TASK [stage-output : Make all log files readable] 2026-01-02 04:30:24.594200 | orchestrator | ok 2026-01-02 04:30:24.605979 | 2026-01-02 04:30:24.606277 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-01-02 04:30:24.654361 | orchestrator | skipping: Conditional result was False 2026-01-02 04:30:24.670717 | 2026-01-02 04:30:24.670926 | TASK [stage-output : Discover log files for compression] 2026-01-02 04:30:24.696607 | orchestrator | skipping: Conditional result was False 2026-01-02 04:30:24.712004 | 2026-01-02 
04:30:24.712183 | LOOP [stage-output : Archive everything from logs]
2026-01-02 04:30:24.769511 |
2026-01-02 04:30:24.769714 | PLAY [Post cleanup play]
2026-01-02 04:30:24.781842 |
2026-01-02 04:30:24.782053 | TASK [Set cloud fact (Zuul deployment)]
2026-01-02 04:30:24.841845 | orchestrator | ok
2026-01-02 04:30:24.856447 |
2026-01-02 04:30:24.856591 | TASK [Set cloud fact (local deployment)]
2026-01-02 04:30:24.881581 | orchestrator | skipping: Conditional result was False
2026-01-02 04:30:24.889752 |
2026-01-02 04:30:24.889871 | TASK [Clean the cloud environment]
2026-01-02 04:30:26.052730 | orchestrator | 2026-01-02 04:30:26 - clean up servers
2026-01-02 04:30:26.926866 | orchestrator | 2026-01-02 04:30:26 - testbed-manager
2026-01-02 04:30:27.007302 | orchestrator | 2026-01-02 04:30:27 - testbed-node-1
2026-01-02 04:30:27.093553 | orchestrator | 2026-01-02 04:30:27 - testbed-node-5
2026-01-02 04:30:27.186414 | orchestrator | 2026-01-02 04:30:27 - testbed-node-2
2026-01-02 04:30:27.282070 | orchestrator | 2026-01-02 04:30:27 - testbed-node-4
2026-01-02 04:30:27.375107 | orchestrator | 2026-01-02 04:30:27 - testbed-node-0
2026-01-02 04:30:27.470183 | orchestrator | 2026-01-02 04:30:27 - testbed-node-3
2026-01-02 04:30:27.552733 | orchestrator | 2026-01-02 04:30:27 - clean up keypairs
2026-01-02 04:30:27.569099 | orchestrator | 2026-01-02 04:30:27 - testbed
2026-01-02 04:30:27.594122 | orchestrator | 2026-01-02 04:30:27 - wait for servers to be gone
2026-01-02 04:30:38.449466 | orchestrator | 2026-01-02 04:30:38 - clean up ports
2026-01-02 04:30:38.632157 | orchestrator | 2026-01-02 04:30:38 - 1985c94b-6018-4e8c-b693-4df9fee85a72
2026-01-02 04:30:38.885733 | orchestrator | 2026-01-02 04:30:38 - 3a64d59b-a4c3-431f-a092-a77c0ea78286
2026-01-02 04:30:39.136532 | orchestrator | 2026-01-02 04:30:39 - 5aac6504-95f1-4c8c-9c86-f13aa5def162
2026-01-02 04:30:39.392571 | orchestrator | 2026-01-02 04:30:39 - 6f163c71-4666-4b40-b875-bf05dd0c645f
2026-01-02 04:30:39.804823 | orchestrator | 2026-01-02 04:30:39 - b0ec84eb-f186-440d-923c-61d5876f5b82
2026-01-02 04:30:40.034917 | orchestrator | 2026-01-02 04:30:40 - e7592525-89ea-4b7e-8149-ce80346863a3
2026-01-02 04:30:40.322525 | orchestrator | 2026-01-02 04:30:40 - fdc8f55f-19e2-4c8d-bb27-d922cc2c9944
2026-01-02 04:30:40.556714 | orchestrator | 2026-01-02 04:30:40 - clean up volumes
2026-01-02 04:30:40.706997 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-1-node-base
2026-01-02 04:30:40.742702 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-2-node-base
2026-01-02 04:30:40.783008 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-5-node-base
2026-01-02 04:30:40.819955 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-4-node-base
2026-01-02 04:30:40.861632 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-3-node-base
2026-01-02 04:30:40.908981 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-0-node-base
2026-01-02 04:30:40.956111 | orchestrator | 2026-01-02 04:30:40 - testbed-volume-manager-base
2026-01-02 04:30:41.007576 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-5-node-5
2026-01-02 04:30:41.050320 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-3-node-3
2026-01-02 04:30:41.101287 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-8-node-5
2026-01-02 04:30:41.146664 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-4-node-4
2026-01-02 04:30:41.199891 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-6-node-3
2026-01-02 04:30:41.247502 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-0-node-3
2026-01-02 04:30:41.297665 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-7-node-4
2026-01-02 04:30:41.344998 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-2-node-5
2026-01-02 04:30:41.390219 | orchestrator | 2026-01-02 04:30:41 - testbed-volume-1-node-4
2026-01-02 04:30:41.439082 | orchestrator | 2026-01-02 04:30:41 - disconnect routers
2026-01-02 04:30:41.574428 | orchestrator | 2026-01-02 04:30:41 - testbed
2026-01-02 04:30:42.580048 | orchestrator | 2026-01-02 04:30:42 - clean up subnets
2026-01-02 04:30:42.632557 | orchestrator | 2026-01-02 04:30:42 - subnet-testbed-management
2026-01-02 04:30:42.808793 | orchestrator | 2026-01-02 04:30:42 - clean up networks
2026-01-02 04:30:43.022091 | orchestrator | 2026-01-02 04:30:43 - net-testbed-management
2026-01-02 04:30:43.371391 | orchestrator | 2026-01-02 04:30:43 - clean up security groups
2026-01-02 04:30:43.408209 | orchestrator | 2026-01-02 04:30:43 - testbed-node
2026-01-02 04:30:44.056754 | orchestrator | 2026-01-02 04:30:44 - testbed-management
2026-01-02 04:30:44.166130 | orchestrator | 2026-01-02 04:30:44 - clean up floating ips
2026-01-02 04:30:44.201114 | orchestrator | 2026-01-02 04:30:44 - 81.163.193.159
2026-01-02 04:30:44.571165 | orchestrator | 2026-01-02 04:30:44 - clean up routers
2026-01-02 04:30:44.712578 | orchestrator | 2026-01-02 04:30:44 - testbed
2026-01-02 04:30:45.943693 | orchestrator | ok: Runtime: 0:00:20.479541
2026-01-02 04:30:45.947091 |
2026-01-02 04:30:45.947221 | PLAY RECAP
2026-01-02 04:30:45.947307 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-02 04:30:45.947349 |
2026-01-02 04:30:46.115908 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-02 04:30:46.117851 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-02 04:30:47.043773 |
2026-01-02 04:30:47.043968 | PLAY [Cleanup play]
2026-01-02 04:30:47.063163 |
2026-01-02 04:30:47.063314 | TASK [Set cloud fact (Zuul deployment)]
2026-01-02 04:30:47.118441 | orchestrator | ok
2026-01-02 04:30:47.127325 |
2026-01-02 04:30:47.127514 | TASK [Set cloud fact (local deployment)]
2026-01-02 04:30:47.162047 | orchestrator | skipping: Conditional result was False
2026-01-02 04:30:47.171401 |
2026-01-02 04:30:47.171531 | TASK [Clean the cloud environment]
2026-01-02 04:30:48.395411 | orchestrator | 2026-01-02 04:30:48 - clean up servers
2026-01-02 04:30:49.026549 | orchestrator | 2026-01-02 04:30:49 - clean up keypairs
2026-01-02 04:30:49.044288 | orchestrator | 2026-01-02 04:30:49 - wait for servers to be gone
2026-01-02 04:30:49.094261 | orchestrator | 2026-01-02 04:30:49 - clean up ports
2026-01-02 04:30:49.175773 | orchestrator | 2026-01-02 04:30:49 - clean up volumes
2026-01-02 04:30:49.244223 | orchestrator | 2026-01-02 04:30:49 - disconnect routers
2026-01-02 04:30:49.266459 | orchestrator | 2026-01-02 04:30:49 - clean up subnets
2026-01-02 04:30:49.290511 | orchestrator | 2026-01-02 04:30:49 - clean up networks
2026-01-02 04:30:49.487222 | orchestrator | 2026-01-02 04:30:49 - clean up security groups
2026-01-02 04:30:49.535889 | orchestrator | 2026-01-02 04:30:49 - clean up floating ips
2026-01-02 04:30:49.568506 | orchestrator | 2026-01-02 04:30:49 - clean up routers
2026-01-02 04:30:49.736805 | orchestrator | ok: Runtime: 0:00:01.597020
2026-01-02 04:30:49.739384 |
2026-01-02 04:30:49.739512 | PLAY RECAP
2026-01-02 04:30:49.739595 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-02 04:30:49.739641 |
2026-01-02 04:30:49.913397 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-02 04:30:49.915733 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-02 04:30:50.758553 |
2026-01-02 04:30:50.758732 | PLAY [Base post-fetch]
2026-01-02 04:30:50.775216 |
2026-01-02 04:30:50.775374 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-02 04:30:50.841734 | orchestrator | skipping: Conditional result was False
2026-01-02 04:30:50.857160 |
2026-01-02 04:30:50.857401 | TASK [fetch-output : Set log path for single node]
2026-01-02 04:30:50.912647 | orchestrator | ok
2026-01-02 04:30:50.922748 |
2026-01-02 04:30:50.922930 | LOOP [fetch-output : Ensure local output dirs]
2026-01-02 04:30:51.470369 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/work/logs"
2026-01-02 04:30:51.772061 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/work/artifacts"
2026-01-02 04:30:52.072307 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/96a496e8a76b473f95201e7dbfdd1770/work/docs"
2026-01-02 04:30:52.093365 |
2026-01-02 04:30:52.093505 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-02 04:30:53.148887 | orchestrator | changed: .d..t...... ./
2026-01-02 04:30:53.149188 | orchestrator | changed: All items complete
2026-01-02 04:30:53.149229 |
2026-01-02 04:30:53.887382 | orchestrator | changed: .d..t...... ./
2026-01-02 04:30:54.696652 | orchestrator | changed: .d..t...... ./
2026-01-02 04:30:54.725042 |
2026-01-02 04:30:54.725212 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-02 04:30:54.757773 | orchestrator | skipping: Conditional result was False
2026-01-02 04:30:54.760438 | orchestrator | skipping: Conditional result was False
2026-01-02 04:30:54.779176 |
2026-01-02 04:30:54.779306 | PLAY RECAP
2026-01-02 04:30:54.779385 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-02 04:30:54.779424 |
2026-01-02 04:30:54.921348 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-02 04:30:54.923378 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-02 04:30:55.763757 |
2026-01-02 04:30:55.763956 | PLAY [Base post]
2026-01-02 04:30:55.780239 |
2026-01-02 04:30:55.780422 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-02 04:30:56.827792 | orchestrator | changed
2026-01-02 04:30:56.835506 |
2026-01-02 04:30:56.835629 | PLAY RECAP
2026-01-02 04:30:56.835696 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-02 04:30:56.835760 |
2026-01-02 04:30:56.969154 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-02 04:30:56.970232 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-02 04:30:57.819057 |
2026-01-02 04:30:57.819279 | PLAY [Base post-logs]
2026-01-02 04:30:57.831147 |
2026-01-02 04:30:57.831313 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-02 04:30:58.331465 | localhost | changed
2026-01-02 04:30:58.349475 |
2026-01-02 04:30:58.349722 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-02 04:30:58.377151 | localhost | ok
2026-01-02 04:30:58.381328 |
2026-01-02 04:30:58.381487 | TASK [Set zuul-log-path fact]
2026-01-02 04:30:58.409628 | localhost | ok
2026-01-02 04:30:58.426594 |
2026-01-02 04:30:58.426766 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-02 04:30:58.466412 | localhost | ok
2026-01-02 04:30:58.473209 |
2026-01-02 04:30:58.473385 | TASK [upload-logs : Create log directories]
2026-01-02 04:30:59.005453 | localhost | changed
2026-01-02 04:30:59.009432 |
2026-01-02 04:30:59.009584 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-02 04:30:59.558616 | localhost -> localhost | ok: Runtime: 0:00:00.008500
2026-01-02 04:30:59.563256 |
2026-01-02 04:30:59.563400 | TASK [upload-logs : Upload logs to log server]
2026-01-02 04:31:00.155324 | localhost | Output suppressed because no_log was given
2026-01-02 04:31:00.157161 |
2026-01-02 04:31:00.157273 | LOOP [upload-logs : Compress console log and json output]
2026-01-02 04:31:00.213654 | localhost | skipping: Conditional result was False
2026-01-02 04:31:00.220456 | localhost | skipping: Conditional result was False
2026-01-02 04:31:00.231880 |
2026-01-02 04:31:00.232021 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-02 04:31:00.278220 | localhost | skipping: Conditional result was False
2026-01-02 04:31:00.278653 |
2026-01-02 04:31:00.284112 | localhost | skipping: Conditional result was False
2026-01-02 04:31:00.290811 |
2026-01-02 04:31:00.291067 | LOOP [upload-logs : Upload console log and json output]
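The status loop that dominates this console (and ultimately ended in RUN END RESULT_TIMED_OUT) follows a standard poll-with-deadline pattern: query each task's state, report it, and sleep a fixed interval until every task leaves a running state or the deadline passes. A minimal sketch in Python, assuming a hypothetical caller-supplied `get_task_state` helper (not part of the OSISM tooling shown in this log):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll tasks until none is PENDING/STARTED, or raise at the deadline.

    get_task_state is a hypothetical callable mapping a task id to a
    state string; the log above shows the same loop shape: print each
    state, then wait a fixed interval before the next check.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: get_task_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once every task has reached a terminal state.
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError("tasks still running at deadline")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

Note that the observed cadence in the log is roughly three seconds per iteration despite the one-second configured wait, consistent with the state queries themselves taking time on top of the sleep.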